Sep 6 00:20:55.939103 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:20:55.939131 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:20:55.939144 kernel: BIOS-provided physical RAM map:
Sep 6 00:20:55.939151 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 00:20:55.939158 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 00:20:55.939164 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 00:20:55.939172 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 6 00:20:55.939179 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 6 00:20:55.939188 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 00:20:55.939194 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 00:20:55.939201 kernel: NX (Execute Disable) protection: active
Sep 6 00:20:55.939208 kernel: SMBIOS 2.8 present.
Sep 6 00:20:55.939215 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 6 00:20:55.939222 kernel: Hypervisor detected: KVM
Sep 6 00:20:55.939230 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:20:55.939240 kernel: kvm-clock: cpu 0, msr 3d19f001, primary cpu clock
Sep 6 00:20:55.939248 kernel: kvm-clock: using sched offset of 3849099796 cycles
Sep 6 00:20:55.939256 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:20:55.939266 kernel: tsc: Detected 2494.140 MHz processor
Sep 6 00:20:55.939274 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:20:55.939282 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:20:55.939289 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 6 00:20:55.939296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:20:55.939306 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:20:55.939314 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 6 00:20:55.939321 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939328 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939336 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939343 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 6 00:20:55.939350 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939358 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939365 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939375 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:20:55.939382 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 6 00:20:55.939389 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 6 00:20:55.939396 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 6 00:20:55.939415 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 6 00:20:55.939422 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 6 00:20:55.939429 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 6 00:20:55.939437 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 6 00:20:55.939451 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:20:55.939459 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:20:55.939467 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 00:20:55.939478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 6 00:20:55.939521 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 6 00:20:55.939529 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 6 00:20:55.939542 kernel: Zone ranges:
Sep 6 00:20:55.939550 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:20:55.939558 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 6 00:20:55.939566 kernel: Normal empty
Sep 6 00:20:55.939574 kernel: Movable zone start for each node
Sep 6 00:20:55.939581 kernel: Early memory node ranges
Sep 6 00:20:55.939589 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 00:20:55.939597 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 6 00:20:55.939605 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 6 00:20:55.939616 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:20:55.939628 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 00:20:55.939636 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 6 00:20:55.939644 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:20:55.939652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:20:55.939660 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:20:55.939668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:20:55.939676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:20:55.939684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:20:55.939694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:20:55.939705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:20:55.939713 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:20:55.939721 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:20:55.939729 kernel: TSC deadline timer available
Sep 6 00:20:55.939737 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:20:55.939745 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 6 00:20:55.939752 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:20:55.939760 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:20:55.939771 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:20:55.939779 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:20:55.939787 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:20:55.939795 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:20:55.939809 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 6 00:20:55.939817 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 6 00:20:55.939825 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 6 00:20:55.939835 kernel: Policy zone: DMA32
Sep 6 00:20:55.939845 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:20:55.939856 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:20:55.939864 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:20:55.939872 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:20:55.939880 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:20:55.939889 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 6 00:20:55.939897 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:20:55.939905 kernel: Kernel/User page tables isolation: enabled
Sep 6 00:20:55.939913 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:20:55.939923 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:20:55.939931 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:20:55.939940 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:20:55.939948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:20:55.939956 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:20:55.939967 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:20:55.939979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:20:55.939991 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:20:55.940002 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 6 00:20:55.940017 kernel: random: crng init done
Sep 6 00:20:55.940029 kernel: Console: colour VGA+ 80x25
Sep 6 00:20:55.940041 kernel: printk: console [tty0] enabled
Sep 6 00:20:55.940053 kernel: printk: console [ttyS0] enabled
Sep 6 00:20:55.940061 kernel: ACPI: Core revision 20210730
Sep 6 00:20:55.940069 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:20:55.940077 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:20:55.940085 kernel: x2apic enabled
Sep 6 00:20:55.940093 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:20:55.940102 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:20:55.940117 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 6 00:20:55.940129 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 6 00:20:55.940146 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 6 00:20:55.940157 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 6 00:20:55.940169 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:20:55.940182 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:20:55.940191 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:20:55.940199 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 6 00:20:55.940210 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:20:55.940227 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:20:55.940235 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 00:20:55.940246 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:20:55.940255 kernel: active return thunk: its_return_thunk
Sep 6 00:20:55.940263 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:20:55.940271 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:20:55.940280 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:20:55.940288 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:20:55.940297 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:20:55.940308 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:20:55.940318 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:20:55.940327 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:20:55.940336 kernel: LSM: Security Framework initializing
Sep 6 00:20:55.940344 kernel: SELinux: Initializing.
Sep 6 00:20:55.940352 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:20:55.940361 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:20:55.940372 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 6 00:20:55.940381 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 6 00:20:55.940390 kernel: signal: max sigframe size: 1776
Sep 6 00:20:55.940407 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:20:55.940434 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:20:55.940443 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:20:55.940451 kernel: x86: Booting SMP configuration:
Sep 6 00:20:55.940460 kernel: .... node #0, CPUs: #1
Sep 6 00:20:55.940468 kernel: kvm-clock: cpu 1, msr 3d19f041, secondary cpu clock
Sep 6 00:20:55.940487 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 6 00:20:55.940496 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:20:55.940505 kernel: smpboot: Max logical packages: 1
Sep 6 00:20:55.940513 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 6 00:20:55.940522 kernel: devtmpfs: initialized
Sep 6 00:20:55.940530 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:20:55.940539 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:20:55.940547 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:20:55.940556 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:20:55.940567 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:20:55.940578 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:20:55.940586 kernel: audit: type=2000 audit(1757118055.175:1): state=initialized audit_enabled=0 res=1
Sep 6 00:20:55.940595 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:20:55.940605 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:20:55.940614 kernel: cpuidle: using governor menu
Sep 6 00:20:55.940625 kernel: ACPI: bus type PCI registered
Sep 6 00:20:55.940633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:20:55.940642 kernel: dca service started, version 1.12.1
Sep 6 00:20:55.940653 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:20:55.940661 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:20:55.940670 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:20:55.940678 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:20:55.940687 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:20:55.940695 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:20:55.940704 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:20:55.940712 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:20:55.940720 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:20:55.940731 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:20:55.940740 kernel: ACPI: Interpreter enabled
Sep 6 00:20:55.940748 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:20:55.940757 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:20:55.940771 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:20:55.940782 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 6 00:20:55.940792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:20:55.941029 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:20:55.941140 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 6 00:20:55.941152 kernel: acpiphp: Slot [3] registered
Sep 6 00:20:55.941161 kernel: acpiphp: Slot [4] registered
Sep 6 00:20:55.941170 kernel: acpiphp: Slot [5] registered
Sep 6 00:20:55.941178 kernel: acpiphp: Slot [6] registered
Sep 6 00:20:55.941187 kernel: acpiphp: Slot [7] registered
Sep 6 00:20:55.941195 kernel: acpiphp: Slot [8] registered
Sep 6 00:20:55.941204 kernel: acpiphp: Slot [9] registered
Sep 6 00:20:55.941212 kernel: acpiphp: Slot [10] registered
Sep 6 00:20:55.941224 kernel: acpiphp: Slot [11] registered
Sep 6 00:20:55.941232 kernel: acpiphp: Slot [12] registered
Sep 6 00:20:55.941241 kernel: acpiphp: Slot [13] registered
Sep 6 00:20:55.941249 kernel: acpiphp: Slot [14] registered
Sep 6 00:20:55.941258 kernel: acpiphp: Slot [15] registered
Sep 6 00:20:55.941266 kernel: acpiphp: Slot [16] registered
Sep 6 00:20:55.941274 kernel: acpiphp: Slot [17] registered
Sep 6 00:20:55.941283 kernel: acpiphp: Slot [18] registered
Sep 6 00:20:55.941291 kernel: acpiphp: Slot [19] registered
Sep 6 00:20:55.941302 kernel: acpiphp: Slot [20] registered
Sep 6 00:20:55.941311 kernel: acpiphp: Slot [21] registered
Sep 6 00:20:55.941319 kernel: acpiphp: Slot [22] registered
Sep 6 00:20:55.941327 kernel: acpiphp: Slot [23] registered
Sep 6 00:20:55.941389 kernel: acpiphp: Slot [24] registered
Sep 6 00:20:55.941410 kernel: acpiphp: Slot [25] registered
Sep 6 00:20:55.941419 kernel: acpiphp: Slot [26] registered
Sep 6 00:20:55.941427 kernel: acpiphp: Slot [27] registered
Sep 6 00:20:55.941436 kernel: acpiphp: Slot [28] registered
Sep 6 00:20:55.941445 kernel: acpiphp: Slot [29] registered
Sep 6 00:20:55.941456 kernel: acpiphp: Slot [30] registered
Sep 6 00:20:55.941464 kernel: acpiphp: Slot [31] registered
Sep 6 00:20:55.941479 kernel: PCI host bridge to bus 0000:00
Sep 6 00:20:55.941597 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:20:55.941693 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:20:55.941775 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:20:55.941856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 6 00:20:55.941938 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 6 00:20:55.942016 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:20:55.942162 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 6 00:20:55.942277 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 6 00:20:55.942383 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 6 00:20:55.942487 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 6 00:20:55.942580 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 6 00:20:55.942672 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 6 00:20:55.942762 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 6 00:20:55.942850 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 6 00:20:55.943007 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 6 00:20:55.943098 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 6 00:20:55.943219 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 6 00:20:55.943369 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 6 00:20:55.943479 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 6 00:20:55.943593 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 6 00:20:55.943683 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 6 00:20:55.943786 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 6 00:20:55.943874 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 6 00:20:55.943966 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 6 00:20:55.944058 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:20:55.944190 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:20:55.944288 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 6 00:20:55.944381 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 6 00:20:55.947773 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 6 00:20:55.947917 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:20:55.948069 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 6 00:20:55.948212 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 6 00:20:55.948315 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 6 00:20:55.948478 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 6 00:20:55.948593 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 6 00:20:55.948733 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 6 00:20:55.948827 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 6 00:20:55.948937 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:20:55.949040 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 00:20:55.949136 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 6 00:20:55.949223 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 6 00:20:55.949328 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:20:55.949476 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 6 00:20:55.949582 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 6 00:20:55.949705 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 6 00:20:55.949812 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 6 00:20:55.949913 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 6 00:20:55.950030 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 6 00:20:55.950048 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:20:55.950062 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:20:55.950076 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:20:55.950096 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:20:55.950108 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 6 00:20:55.950123 kernel: iommu: Default domain type: Translated
Sep 6 00:20:55.950132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:20:55.950245 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 6 00:20:55.950335 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:20:55.950581 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 6 00:20:55.950606 kernel: vgaarb: loaded
Sep 6 00:20:55.950615 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:20:55.950630 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:20:55.950639 kernel: PTP clock support registered
Sep 6 00:20:55.950647 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:20:55.950656 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:20:55.950665 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 00:20:55.950673 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 6 00:20:55.950682 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:20:55.950691 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:20:55.950699 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:20:55.950711 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:20:55.950720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:20:55.950728 kernel: pnp: PnP ACPI init
Sep 6 00:20:55.950737 kernel: pnp: PnP ACPI: found 4 devices
Sep 6 00:20:55.950745 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:20:55.950754 kernel: NET: Registered PF_INET protocol family
Sep 6 00:20:55.950763 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:20:55.950776 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 6 00:20:55.950788 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:20:55.950797 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:20:55.950806 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 6 00:20:55.950815 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 6 00:20:55.950824 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:20:55.950832 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:20:55.950841 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:20:55.950849 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:20:55.950948 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:20:55.951031 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:20:55.951118 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:20:55.951229 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 6 00:20:55.951321 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 6 00:20:55.952530 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 6 00:20:55.952653 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 6 00:20:55.952756 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 6 00:20:55.952768 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 6 00:20:55.952865 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 37244 usecs
Sep 6 00:20:55.952877 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:20:55.952886 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:20:55.952895 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 6 00:20:55.952904 kernel: Initialise system trusted keyrings
Sep 6 00:20:55.952913 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 6 00:20:55.952922 kernel: Key type asymmetric registered
Sep 6 00:20:55.952931 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:20:55.952939 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:20:55.952951 kernel: io scheduler mq-deadline registered
Sep 6 00:20:55.952959 kernel: io scheduler kyber registered
Sep 6 00:20:55.952968 kernel: io scheduler bfq registered
Sep 6 00:20:55.952976 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:20:55.952985 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 6 00:20:55.952994 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 6 00:20:55.953003 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 6 00:20:55.953011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:20:55.953020 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:20:55.953031 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:20:55.953040 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:20:55.953048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:20:55.953057 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:20:55.953194 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 6 00:20:55.953283 kernel: rtc_cmos 00:03: registered as rtc0
Sep 6 00:20:55.953394 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T00:20:55 UTC (1757118055)
Sep 6 00:20:55.954629 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 6 00:20:55.954644 kernel: intel_pstate: CPU model not supported
Sep 6 00:20:55.954654 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:20:55.954663 kernel: Segment Routing with IPv6
Sep 6 00:20:55.954672 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:20:55.954681 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:20:55.954690 kernel: Key type dns_resolver registered
Sep 6 00:20:55.954699 kernel: IPI shorthand broadcast: enabled
Sep 6 00:20:55.954708 kernel: sched_clock: Marking stable (706001933, 119221017)->(971417976, -146195026)
Sep 6 00:20:55.954716 kernel: registered taskstats version 1
Sep 6 00:20:55.954728 kernel: Loading compiled-in X.509 certificates
Sep 6 00:20:55.954737 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:20:55.954746 kernel: Key type .fscrypt registered
Sep 6 00:20:55.954754 kernel: Key type fscrypt-provisioning registered
Sep 6 00:20:55.954763 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:20:55.954772 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:20:55.954780 kernel: ima: No architecture policies found
Sep 6 00:20:55.954789 kernel: clk: Disabling unused clocks
Sep 6 00:20:55.954800 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:20:55.954809 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:20:55.954818 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:20:55.954827 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:20:55.954835 kernel: Run /init as init process
Sep 6 00:20:55.954845 kernel: with arguments:
Sep 6 00:20:55.954881 kernel: /init
Sep 6 00:20:55.954893 kernel: with environment:
Sep 6 00:20:55.954904 kernel: HOME=/
Sep 6 00:20:55.954919 kernel: TERM=linux
Sep 6 00:20:55.954928 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:20:55.954941 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:20:55.954952 systemd[1]: Detected virtualization kvm.
Sep 6 00:20:55.954962 systemd[1]: Detected architecture x86-64.
Sep 6 00:20:55.954971 systemd[1]: Running in initrd.
Sep 6 00:20:55.954980 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:20:55.954989 systemd[1]: Hostname set to .
Sep 6 00:20:55.955004 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:20:55.955021 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:20:55.955030 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:20:55.955043 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:20:55.955056 systemd[1]: Reached target paths.target.
Sep 6 00:20:55.955069 systemd[1]: Reached target slices.target.
Sep 6 00:20:55.955078 systemd[1]: Reached target swap.target.
Sep 6 00:20:55.955088 systemd[1]: Reached target timers.target.
Sep 6 00:20:55.955104 systemd[1]: Listening on iscsid.socket.
Sep 6 00:20:55.955113 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:20:55.955125 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:20:55.955135 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:20:55.955164 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:20:55.955174 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:20:55.955183 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:20:55.955192 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:20:55.955204 systemd[1]: Reached target sockets.target.
Sep 6 00:20:55.955214 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:20:55.955226 systemd[1]: Finished network-cleanup.service.
Sep 6 00:20:55.955236 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:20:55.955250 systemd[1]: Starting systemd-journald.service...
Sep 6 00:20:55.955262 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:20:55.955271 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:20:55.955281 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:20:55.955290 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:20:55.955300 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:20:55.955309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:20:55.955325 systemd-journald[184]: Journal started
Sep 6 00:20:55.955382 systemd-journald[184]: Runtime Journal (/run/log/journal/0bb498aaa6234fb2b2a442875bbfa783) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:20:55.930860 systemd-modules-load[185]: Inserted module 'overlay'
Sep 6 00:20:55.975819 systemd[1]: Started systemd-journald.service.
Sep 6 00:20:55.960083 systemd-resolved[186]: Positive Trust Anchors:
Sep 6 00:20:55.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.960097 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:20:55.987251 kernel: audit: type=1130 audit(1757118055.973:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.987297 kernel: audit: type=1130 audit(1757118055.974:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.987311 kernel: audit: type=1130 audit(1757118055.974:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.987337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:20:55.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.960133 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:20:55.963276 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 6 00:20:55.992429 kernel: audit: type=1130 audit(1757118055.989:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:55.974172 systemd[1]: Started systemd-resolved.service.
Sep 6 00:20:55.974855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:20:55.976623 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:20:55.987823 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:20:55.991953 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:20:56.002672 kernel: Bridge firewalling registered
Sep 6 00:20:55.998156 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 6 00:20:56.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.018422 kernel: audit: type=1130 audit(1757118056.012:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.012554 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:20:56.013916 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:20:56.024451 kernel: SCSI subsystem initialized
Sep 6 00:20:56.034341 dracut-cmdline[202]: dracut-dracut-053
Sep 6 00:20:56.038014 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:20:56.043828 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:20:56.043910 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:20:56.043925 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:20:56.048582 systemd-modules-load[185]: Inserted module 'dm_multipath'
Sep 6 00:20:56.049554 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:20:56.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.059655 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:20:56.060824 kernel: audit: type=1130 audit(1757118056.056:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.071112 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:20:56.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.074441 kernel: audit: type=1130 audit(1757118056.071:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.133460 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:20:56.153473 kernel: iscsi: registered transport (tcp)
Sep 6 00:20:56.180529 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:20:56.180637 kernel: QLogic iSCSI HBA Driver
Sep 6 00:20:56.229628 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:20:56.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.231330 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:20:56.235576 kernel: audit: type=1130 audit(1757118056.229:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.294510 kernel: raid6: avx2x4 gen() 14381 MB/s
Sep 6 00:20:56.311499 kernel: raid6: avx2x4 xor() 4475 MB/s
Sep 6 00:20:56.328470 kernel: raid6: avx2x2 gen() 15871 MB/s
Sep 6 00:20:56.345498 kernel: raid6: avx2x2 xor() 12016 MB/s
Sep 6 00:20:56.362492 kernel: raid6: avx2x1 gen() 14150 MB/s
Sep 6 00:20:56.379476 kernel: raid6: avx2x1 xor() 9966 MB/s
Sep 6 00:20:56.396492 kernel: raid6: sse2x4 gen() 8349 MB/s
Sep 6 00:20:56.413492 kernel: raid6: sse2x4 xor() 4158 MB/s
Sep 6 00:20:56.430508 kernel: raid6: sse2x2 gen() 8453 MB/s
Sep 6 00:20:56.447478 kernel: raid6: sse2x2 xor() 5714 MB/s
Sep 6 00:20:56.464492 kernel: raid6: sse2x1 gen() 7451 MB/s
Sep 6 00:20:56.482102 kernel: raid6: sse2x1 xor() 4929 MB/s
Sep 6 00:20:56.482213 kernel: raid6: using algorithm avx2x2 gen() 15871 MB/s
Sep 6 00:20:56.482244 kernel: raid6: .... xor() 12016 MB/s, rmw enabled
Sep 6 00:20:56.482834 kernel: raid6: using avx2x2 recovery algorithm
Sep 6 00:20:56.500458 kernel: xor: automatically using best checksumming function avx
Sep 6 00:20:56.616461 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 6 00:20:56.629395 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:20:56.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.631011 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:20:56.629000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:20:56.629000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:20:56.636429 kernel: audit: type=1130 audit(1757118056.629:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.648656 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Sep 6 00:20:56.655999 systemd[1]: Started systemd-udevd.service.
Sep 6 00:20:56.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.660313 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:20:56.677434 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation
Sep 6 00:20:56.725151 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:20:56.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.727266 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:20:56.796274 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:20:56.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:56.858426 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 6 00:20:56.926081 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:20:56.926100 kernel: scsi host0: Virtio SCSI HBA
Sep 6 00:20:56.926237 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:20:56.926251 kernel: GPT:9289727 != 125829119
Sep 6 00:20:56.926261 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:20:56.926273 kernel: GPT:9289727 != 125829119
Sep 6 00:20:56.926284 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:20:56.926294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:20:56.928428 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 6 00:20:56.936317 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 6 00:20:56.936354 kernel: AES CTR mode by8 optimization enabled
Sep 6 00:20:56.969433 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (430)
Sep 6 00:20:56.971570 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:20:56.972757 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:20:56.977180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:20:56.978516 kernel: ACPI: bus type USB registered
Sep 6 00:20:56.979952 systemd[1]: Starting disk-uuid.service...
Sep 6 00:20:56.987658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:20:56.990019 disk-uuid[459]: Primary Header is updated.
Sep 6 00:20:56.990019 disk-uuid[459]: Secondary Entries is updated.
Sep 6 00:20:56.990019 disk-uuid[459]: Secondary Header is updated.
Sep 6 00:20:57.002429 kernel: usbcore: registered new interface driver usbfs
Sep 6 00:20:57.002498 kernel: libata version 3.00 loaded.
Sep 6 00:20:57.002518 kernel: usbcore: registered new interface driver hub
Sep 6 00:20:57.002534 kernel: usbcore: registered new device driver usb
Sep 6 00:20:57.005908 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 6 00:20:57.022172 kernel: scsi host1: ata_piix
Sep 6 00:20:57.022346 kernel: scsi host2: ata_piix
Sep 6 00:20:57.022538 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 6 00:20:57.022558 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 6 00:20:57.018069 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:20:57.117190 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Sep 6 00:20:57.182438 kernel: ehci-pci: EHCI PCI platform driver
Sep 6 00:20:57.192433 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Sep 6 00:20:57.212006 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 6 00:20:57.215130 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 6 00:20:57.215264 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 6 00:20:57.215363 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Sep 6 00:20:57.215478 kernel: hub 1-0:1.0: USB hub found
Sep 6 00:20:57.215611 kernel: hub 1-0:1.0: 2 ports detected
Sep 6 00:20:57.999576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:20:57.999899 disk-uuid[461]: The operation has completed successfully.
Sep 6 00:20:58.040441 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:20:58.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.040608 systemd[1]: Finished disk-uuid.service.
Sep 6 00:20:58.053127 systemd[1]: Starting verity-setup.service...
Sep 6 00:20:58.073453 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 6 00:20:58.125804 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:20:58.129082 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:20:58.131113 systemd[1]: Finished verity-setup.service.
Sep 6 00:20:58.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.217472 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:20:58.214626 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:20:58.215111 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:20:58.216068 systemd[1]: Starting ignition-setup.service...
Sep 6 00:20:58.219753 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:20:58.231846 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:20:58.231924 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:20:58.231945 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:20:58.254256 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:20:58.260513 systemd[1]: Finished ignition-setup.service.
Sep 6 00:20:58.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.261826 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:20:58.365049 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:20:58.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.366000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:20:58.367536 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:20:58.400652 systemd-networkd[690]: lo: Link UP
Sep 6 00:20:58.400663 systemd-networkd[690]: lo: Gained carrier
Sep 6 00:20:58.401276 systemd-networkd[690]: Enumeration completed
Sep 6 00:20:58.401741 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:20:58.402925 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 6 00:20:58.403734 systemd-networkd[690]: eth1: Link UP
Sep 6 00:20:58.403739 systemd-networkd[690]: eth1: Gained carrier
Sep 6 00:20:58.405527 systemd[1]: Started systemd-networkd.service.
Sep 6 00:20:58.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.406783 systemd[1]: Reached target network.target.
Sep 6 00:20:58.408796 systemd[1]: Starting iscsiuio.service...
Sep 6 00:20:58.412824 systemd-networkd[690]: eth0: Link UP
Sep 6 00:20:58.412835 systemd-networkd[690]: eth0: Gained carrier
Sep 6 00:20:58.420743 ignition[612]: Ignition 2.14.0
Sep 6 00:20:58.420760 ignition[612]: Stage: fetch-offline
Sep 6 00:20:58.420866 ignition[612]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:58.420901 ignition[612]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:58.426577 ignition[612]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:58.426729 ignition[612]: parsed url from cmdline: ""
Sep 6 00:20:58.427537 systemd[1]: Started iscsiuio.service.
Sep 6 00:20:58.426733 ignition[612]: no config URL provided
Sep 6 00:20:58.426740 ignition[612]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:20:58.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.429210 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:20:58.426753 ignition[612]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:20:58.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.430836 systemd[1]: Starting ignition-fetch.service...
Sep 6 00:20:58.426759 ignition[612]: failed to fetch config: resource requires networking
Sep 6 00:20:58.443295 systemd[1]: Starting iscsid.service...
Sep 6 00:20:58.427463 ignition[612]: Ignition finished successfully
Sep 6 00:20:58.448037 ignition[695]: Ignition 2.14.0
Sep 6 00:20:58.448052 ignition[695]: Stage: fetch
Sep 6 00:20:58.449460 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:20:58.449460 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 6 00:20:58.449460 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:20:58.449460 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:20:58.449460 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:20:58.449460 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:20:58.448527 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Sep 6 00:20:58.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.448201 ignition[695]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:58.451572 systemd-networkd[690]: eth0: DHCPv4 address 143.198.64.97/20, gateway 143.198.64.1 acquired from 169.254.169.253
Sep 6 00:20:58.448226 ignition[695]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:58.454512 systemd[1]: Started iscsid.service.
Sep 6 00:20:58.455941 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:20:58.460980 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:58.461917 ignition[695]: parsed url from cmdline: ""
Sep 6 00:20:58.462015 ignition[695]: no config URL provided
Sep 6 00:20:58.462541 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:20:58.463236 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:20:58.465008 ignition[695]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 6 00:20:58.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.476736 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:20:58.477563 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:20:58.478115 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:20:58.478449 systemd[1]: Reached target remote-fs.target.
Sep 6 00:20:58.480076 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:20:58.494595 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:20:58.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.496650 ignition[695]: GET result: OK
Sep 6 00:20:58.496867 ignition[695]: parsing config with SHA512: e6082a9d5793622da7af9176f3561ed25e0be207f3067915b8578d0c6a66205efa4c12218e67a0468410f0f71cbcbd91c5e589f0fe4254478bc65dc0248b8f3b
Sep 6 00:20:58.510762 unknown[695]: fetched base config from "system"
Sep 6 00:20:58.511708 unknown[695]: fetched base config from "system"
Sep 6 00:20:58.512364 unknown[695]: fetched user config from "digitalocean"
Sep 6 00:20:58.513919 ignition[695]: fetch: fetch complete
Sep 6 00:20:58.514556 ignition[695]: fetch: fetch passed
Sep 6 00:20:58.515194 ignition[695]: Ignition finished successfully
Sep 6 00:20:58.517878 systemd[1]: Finished ignition-fetch.service.
Sep 6 00:20:58.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.520033 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:20:58.537552 ignition[715]: Ignition 2.14.0
Sep 6 00:20:58.537570 ignition[715]: Stage: kargs
Sep 6 00:20:58.537817 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:58.537853 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:58.541470 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:58.544561 ignition[715]: kargs: kargs passed
Sep 6 00:20:58.544678 ignition[715]: Ignition finished successfully
Sep 6 00:20:58.546288 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:20:58.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.548729 systemd[1]: Starting ignition-disks.service...
Sep 6 00:20:58.566092 ignition[721]: Ignition 2.14.0
Sep 6 00:20:58.566110 ignition[721]: Stage: disks
Sep 6 00:20:58.566362 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:58.566396 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:58.569820 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:58.572529 ignition[721]: disks: disks passed
Sep 6 00:20:58.572620 ignition[721]: Ignition finished successfully
Sep 6 00:20:58.573883 systemd[1]: Finished ignition-disks.service.
Sep 6 00:20:58.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.574943 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:20:58.575502 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:20:58.576321 systemd[1]: Reached target local-fs.target.
Sep 6 00:20:58.577175 systemd[1]: Reached target sysinit.target.
Sep 6 00:20:58.578059 systemd[1]: Reached target basic.target.
Sep 6 00:20:58.580720 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:20:58.602478 systemd-fsck[729]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 6 00:20:58.608163 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:20:58.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.611550 systemd[1]: Mounting sysroot.mount...
Sep 6 00:20:58.629585 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:20:58.630790 systemd[1]: Mounted sysroot.mount.
Sep 6 00:20:58.632160 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:20:58.635226 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:20:58.638020 systemd[1]: Starting flatcar-digitalocean-network.service...
Sep 6 00:20:58.641910 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 00:20:58.643641 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:20:58.644645 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:20:58.648770 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:20:58.653672 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:20:58.667644 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:20:58.680853 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:20:58.694746 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:20:58.708302 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:20:58.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.819431 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:20:58.822683 systemd[1]: Starting ignition-mount.service...
Sep 6 00:20:58.825211 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:20:58.840840 coreos-metadata[735]: Sep 06 00:20:58.840 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:20:58.851011 bash[787]: umount: /sysroot/usr/share/oem: not mounted.
Sep 6 00:20:58.860121 coreos-metadata[735]: Sep 06 00:20:58.860 INFO Fetch successful
Sep 6 00:20:58.869882 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 6 00:20:58.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.870057 systemd[1]: Finished flatcar-digitalocean-network.service.
Sep 6 00:20:58.875918 coreos-metadata[736]: Sep 06 00:20:58.875 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:20:58.884000 ignition[788]: INFO : Ignition 2.14.0
Sep 6 00:20:58.884000 ignition[788]: INFO : Stage: mount
Sep 6 00:20:58.885526 ignition[788]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:58.885526 ignition[788]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:58.887346 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:58.890496 coreos-metadata[736]: Sep 06 00:20:58.890 INFO Fetch successful
Sep 6 00:20:58.893483 ignition[788]: INFO : mount: mount passed
Sep 6 00:20:58.893483 ignition[788]: INFO : Ignition finished successfully
Sep 6 00:20:58.895567 systemd[1]: Finished ignition-mount.service.
Sep 6 00:20:58.896540 coreos-metadata[736]: Sep 06 00:20:58.895 INFO wrote hostname ci-3510.3.8-n-f7f83b6e50 to /sysroot/etc/hostname
Sep 6 00:20:58.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.898874 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 00:20:58.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:58.902274 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:20:58.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:20:59.143924 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:20:59.154508 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Sep 6 00:20:59.156582 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:20:59.156637 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:20:59.156660 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:20:59.163691 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:20:59.166006 systemd[1]: Starting ignition-files.service...
Sep 6 00:20:59.197194 ignition[816]: INFO : Ignition 2.14.0
Sep 6 00:20:59.197194 ignition[816]: INFO : Stage: files
Sep 6 00:20:59.198829 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:20:59.198829 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:20:59.200657 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:20:59.204542 ignition[816]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:20:59.206385 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:20:59.206385 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:20:59.211295 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:20:59.212501 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:20:59.214549 unknown[816]: wrote ssh authorized keys file for user: core
Sep 6 00:20:59.215582 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:20:59.216454 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 6 00:20:59.216454 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 6 00:20:59.374539 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:20:59.554859 systemd-networkd[690]: eth1: Gained IPv6LL
Sep 6 00:20:59.571735 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 6 00:20:59.572900 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:20:59.572900 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:20:59.814439 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:20:59.916357 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:20:59.917242 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:20:59.917242 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:20:59.917242 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:20:59.917242 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:20:59.917242 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:20:59.920896 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 6 00:21:00.320502 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 6 00:21:00.386736 systemd-networkd[690]: eth0: Gained IPv6LL
Sep 6 00:21:00.725680 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 6 00:21:00.725680 ignition[816]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:21:00.725680 ignition[816]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 6 00:21:00.725680 ignition[816]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:21:00.729525 ignition[816]: INFO :
files: op(d): [finished] processing unit "prepare-helm.service" Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:21:00.729525 ignition[816]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:21:00.735348 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:21:00.735348 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:21:00.735348 ignition[816]: INFO : files: files passed Sep 6 00:21:00.735348 ignition[816]: INFO : Ignition finished successfully Sep 6 00:21:00.748636 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:21:00.748689 kernel: audit: type=1130 audit(1757118060.739:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.738976 systemd[1]: Finished ignition-files.service. Sep 6 00:21:00.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
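The "parsing config with SHA512: 865c03…" entries above show Ignition logging a digest of the config it read. A minimal sketch of reproducing that value, assuming (not confirmed by this log) that the digest is simply SHA-512 over the raw config bytes:

```python
import hashlib

def config_digest(config_bytes: bytes) -> str:
    """Return the hex SHA-512 digest Ignition would log for this config.

    Assumption: the logged value is SHA-512 over the raw file contents,
    e.g. of /usr/lib/ignition/base.d/base.ign as read inside the initramfs.
    """
    return hashlib.sha512(config_bytes).hexdigest()
```

On a system where the base config is readable, `config_digest(open("/usr/lib/ignition/base.d/base.ign", "rb").read())` could then be compared against the digest in the journal.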
res=success' Sep 6 00:21:00.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.742124 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:21:00.758545 kernel: audit: type=1130 audit(1757118060.751:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.758598 kernel: audit: type=1131 audit(1757118060.751:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.744850 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:21:00.747857 systemd[1]: Starting ignition-quench.service... Sep 6 00:21:00.751280 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:21:00.751467 systemd[1]: Finished ignition-quench.service. Sep 6 00:21:00.761909 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:21:00.762926 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:21:00.764132 systemd[1]: Reached target ignition-complete.target. Sep 6 00:21:00.768345 kernel: audit: type=1130 audit(1757118060.763:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:00.769559 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:21:00.795210 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:21:00.795444 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:21:00.801163 kernel: audit: type=1130 audit(1757118060.795:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.801204 kernel: audit: type=1131 audit(1757118060.795:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.796658 systemd[1]: Reached target initrd-fs.target. Sep 6 00:21:00.801632 systemd[1]: Reached target initrd.target. Sep 6 00:21:00.802385 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:21:00.804183 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:21:00.822443 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:21:00.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.824106 systemd[1]: Starting initrd-cleanup.service... 
Sep 6 00:21:00.829243 kernel: audit: type=1130 audit(1757118060.822:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.836170 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:21:00.836695 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:21:00.837576 systemd[1]: Stopped target timers.target. Sep 6 00:21:00.838270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:21:00.841842 kernel: audit: type=1131 audit(1757118060.838:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.838458 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:21:00.838969 systemd[1]: Stopped target initrd.target. Sep 6 00:21:00.849217 systemd[1]: Stopped target basic.target. Sep 6 00:21:00.849900 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:21:00.850610 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:21:00.851504 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:21:00.852107 systemd[1]: Stopped target remote-fs.target. Sep 6 00:21:00.852745 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:21:00.853484 systemd[1]: Stopped target sysinit.target. Sep 6 00:21:00.854118 systemd[1]: Stopped target local-fs.target. Sep 6 00:21:00.854759 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:21:00.855303 systemd[1]: Stopped target swap.target. 
Sep 6 00:21:00.859392 kernel: audit: type=1131 audit(1757118060.856:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.855871 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:21:00.855996 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:21:00.862927 kernel: audit: type=1131 audit(1757118060.860:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.856693 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:21:00.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.859740 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:21:00.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.859872 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:21:00.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:00.860590 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:21:00.860702 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:21:00.863431 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:21:00.863554 systemd[1]: Stopped ignition-files.service. Sep 6 00:21:00.864021 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 00:21:00.873116 iscsid[700]: iscsid shutting down. Sep 6 00:21:00.864124 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 00:21:00.866005 systemd[1]: Stopping ignition-mount.service... Sep 6 00:21:00.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.866786 systemd[1]: Stopping iscsid.service... Sep 6 00:21:00.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.873870 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:21:00.874361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:21:00.874561 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:21:00.875216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:21:00.875362 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:21:00.877684 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:21:00.877836 systemd[1]: Stopped iscsid.service. Sep 6 00:21:00.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:00.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.892280 ignition[854]: INFO : Ignition 2.14.0 Sep 6 00:21:00.892280 ignition[854]: INFO : Stage: umount Sep 6 00:21:00.892280 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:21:00.892280 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:21:00.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.886565 systemd[1]: Stopping iscsiuio.service... Sep 6 00:21:00.896762 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:21:00.888778 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:21:00.899125 ignition[854]: INFO : umount: umount passed Sep 6 00:21:00.899125 ignition[854]: INFO : Ignition finished successfully Sep 6 00:21:00.891280 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:21:00.891973 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:21:00.892068 systemd[1]: Stopped iscsiuio.service. Sep 6 00:21:00.900489 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:21:00.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:00.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.900645 systemd[1]: Stopped ignition-mount.service. Sep 6 00:21:00.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.901392 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:21:00.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.901506 systemd[1]: Stopped ignition-disks.service. Sep 6 00:21:00.902048 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:21:00.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.902105 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:21:00.902760 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:21:00.902814 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:21:00.903384 systemd[1]: Stopped target network.target. Sep 6 00:21:00.904063 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:21:00.904128 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:21:00.904850 systemd[1]: Stopped target paths.target. Sep 6 00:21:00.905770 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:21:00.907495 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:21:00.907982 systemd[1]: Stopped target slices.target. 
Sep 6 00:21:00.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.908629 systemd[1]: Stopped target sockets.target. Sep 6 00:21:00.909233 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:21:00.909286 systemd[1]: Closed iscsid.socket. Sep 6 00:21:00.910000 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:21:00.910041 systemd[1]: Closed iscsiuio.socket. Sep 6 00:21:00.910672 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:21:00.910718 systemd[1]: Stopped ignition-setup.service. Sep 6 00:21:00.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.911690 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:21:00.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.914196 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:21:00.932000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:21:00.916228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:21:00.920097 systemd-networkd[690]: eth0: DHCPv6 lease lost Sep 6 00:21:00.928598 systemd-networkd[690]: eth1: DHCPv6 lease lost Sep 6 00:21:00.937000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:21:00.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.929029 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 6 00:21:00.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.929163 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:21:00.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.930301 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:21:00.930418 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:21:00.931285 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:21:00.931332 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:21:00.934203 systemd[1]: Stopping network-cleanup.service... Sep 6 00:21:00.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.934961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:21:00.935060 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:21:00.938481 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:21:00.938572 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:21:00.939691 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:21:00.939768 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:21:00.949611 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:21:00.956988 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:21:00.957789 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 6 00:21:00.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.957960 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:21:00.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.978813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:21:00.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.982472 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:21:00.983329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:21:00.983396 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:21:00.984062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:21:00.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.984145 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:21:00.985139 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:21:00.985204 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:21:00.985898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:21:00.985960 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:21:01.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:00.987950 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:21:01.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.988838 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:21:01.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.988939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:21:01.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.989938 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:21:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:01.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:00.990010 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:21:01.000652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:21:01.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:01.000743 systemd[1]: Stopped systemd-vconsole-setup.service. 
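The SERVICE_START/SERVICE_STOP audit records above all share one key=value shape inside the `msg='…'` payload. A small sketch (not part of any Flatcar or auditd tooling; the regex and field choices are assumptions based on the records in this log) for pulling the unit name and result out of such lines:

```python
import re

# Matches records like:
#   audit[1]: SERVICE_STOP pid=1 ... msg='unit=sysroot-boot ... res=success'
AUDIT_RE = re.compile(
    r"audit\[(?P<pid>\d+)\]: (?P<type>SERVICE_START|SERVICE_STOP)"
    r" .*?unit=(?P<unit>[\w@.\\-]+) .*?res=(?P<res>\w+)"
)

def parse_audit(line: str):
    """Return (record type, unit, result) for one audit record, or None."""
    m = AUDIT_RE.search(line)
    if m is None:
        return None
    return m.group("type"), m.group("unit"), m.group("res")
```

Feeding each journal line through `parse_audit` would, for example, turn the last record above into `("SERVICE_STOP", "systemd-vconsole-setup", "success")`.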
Sep 6 00:21:01.003138 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:21:01.004101 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:21:01.004258 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:21:01.011077 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:21:01.011219 systemd[1]: Stopped network-cleanup.service. Sep 6 00:21:01.012156 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:21:01.012288 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:21:01.013150 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:21:01.013747 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:21:01.013827 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:21:01.016136 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:21:01.034752 systemd[1]: Switching root. Sep 6 00:21:01.056800 systemd-journald[184]: Journal stopped Sep 6 00:21:04.907889 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 6 00:21:04.907991 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:21:04.908011 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:21:04.908029 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:21:04.908045 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:21:04.908062 kernel: SELinux: policy capability open_perms=1 Sep 6 00:21:04.908074 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:21:04.908111 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:21:04.908128 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:21:04.908140 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:21:04.908154 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:21:04.908175 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:21:04.908188 systemd[1]: Successfully loaded SELinux policy in 46.222ms. Sep 6 00:21:04.908209 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.921ms. Sep 6 00:21:04.908223 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:21:04.908236 systemd[1]: Detected virtualization kvm. Sep 6 00:21:04.908248 systemd[1]: Detected architecture x86-64. Sep 6 00:21:04.908260 systemd[1]: Detected first boot. Sep 6 00:21:04.908275 systemd[1]: Hostname set to . Sep 6 00:21:04.908287 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:21:04.908300 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:21:04.908311 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:21:04.908328 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:21:04.908342 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:21:04.908356 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:21:04.908374 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 00:21:04.908386 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 00:21:04.908436 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:21:04.908461 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:21:04.908473 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:21:04.908486 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 6 00:21:04.908499 systemd[1]: Created slice system-getty.slice.
Sep 6 00:21:04.908510 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:21:04.908523 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:21:04.908538 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:21:04.908555 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:21:04.908568 systemd[1]: Created slice user.slice.
Sep 6 00:21:04.908580 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:21:04.908592 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:21:04.908604 systemd[1]: Set up automount boot.automount.
Sep 6 00:21:04.908617 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:21:04.908632 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 00:21:04.908645 systemd[1]: Stopped target initrd-fs.target.
Sep 6 00:21:04.908658 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 00:21:04.908671 systemd[1]: Reached target integritysetup.target.
Sep 6 00:21:04.908684 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:21:04.908699 systemd[1]: Reached target remote-fs.target.
Sep 6 00:21:04.908711 systemd[1]: Reached target slices.target.
Sep 6 00:21:04.908724 systemd[1]: Reached target swap.target.
Sep 6 00:21:04.908742 systemd[1]: Reached target torcx.target.
Sep 6 00:21:04.908762 systemd[1]: Reached target veritysetup.target.
Sep 6 00:21:04.908775 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:21:04.908788 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:21:04.908800 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:21:04.908816 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:21:04.908838 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:21:04.908856 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:21:04.908875 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:21:04.908891 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:21:04.908913 systemd[1]: Mounting media.mount...
Sep 6 00:21:04.908931 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:21:04.908957 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:21:04.908976 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:21:04.909035 systemd[1]: Mounting tmp.mount...
Sep 6 00:21:04.909056 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:21:04.909076 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:21:04.909094 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:21:04.909111 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:21:04.909134 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:21:04.909157 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:21:04.909191 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:21:04.909209 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:21:04.909227 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:21:04.909246 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:21:04.909265 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 00:21:04.909283 systemd[1]: Stopped systemd-fsck-root.service.
Sep 6 00:21:04.909301 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 00:21:04.909325 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 00:21:04.909345 systemd[1]: Stopped systemd-journald.service.
Sep 6 00:21:04.909377 systemd[1]: Starting systemd-journald.service...
Sep 6 00:21:04.909395 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:21:04.909434 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:21:04.909467 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:21:04.909486 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:21:04.909506 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 00:21:04.909524 systemd[1]: Stopped verity-setup.service.
Sep 6 00:21:04.909555 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:21:04.909593 kernel: fuse: init (API version 7.34)
Sep 6 00:21:04.909614 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:21:04.909632 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:21:04.909678 systemd[1]: Mounted media.mount.
Sep 6 00:21:04.909711 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:21:04.909724 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:21:04.909737 systemd[1]: Mounted tmp.mount.
Sep 6 00:21:04.909752 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:21:04.909770 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:21:04.909784 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:21:04.909797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:21:04.909810 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:21:04.909825 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:21:04.909847 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:21:04.909866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:21:04.909880 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:21:04.909892 kernel: loop: module loaded
Sep 6 00:21:04.909910 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:21:04.909923 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:21:04.909935 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:21:04.909948 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:21:04.909968 systemd-journald[957]: Journal started
Sep 6 00:21:04.910057 systemd-journald[957]: Runtime Journal (/run/log/journal/0bb498aaa6234fb2b2a442875bbfa783) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:21:04.910102 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:21:01.202000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 00:21:01.266000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:21:01.266000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:21:01.266000 audit: BPF prog-id=10 op=LOAD
Sep 6 00:21:01.266000 audit: BPF prog-id=10 op=UNLOAD
Sep 6 00:21:01.266000 audit: BPF prog-id=11 op=LOAD
Sep 6 00:21:01.267000 audit: BPF prog-id=11 op=UNLOAD
Sep 6 00:21:01.380000 audit[887]: AVC avc: denied { associate } for pid=887 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 00:21:01.380000 audit[887]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858bc a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=870 pid=887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:21:01.380000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:21:01.381000 audit[887]: AVC avc: denied { associate } for pid=887 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 6 00:21:01.381000 audit[887]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000185995 a2=1ed a3=0 items=2 ppid=870 pid=887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:21:01.381000 audit: CWD cwd="/"
Sep 6 00:21:01.381000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:01.381000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:01.381000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 00:21:04.706000 audit: BPF prog-id=12 op=LOAD
Sep 6 00:21:04.706000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 00:21:04.706000 audit: BPF prog-id=13 op=LOAD
Sep 6 00:21:04.706000 audit: BPF prog-id=14 op=LOAD
Sep 6 00:21:04.706000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 00:21:04.706000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=15 op=LOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=12 op=UNLOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=16 op=LOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=17 op=LOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=13 op=UNLOAD
Sep 6 00:21:04.707000 audit: BPF prog-id=14 op=UNLOAD
Sep 6 00:21:04.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.714000 audit: BPF prog-id=15 op=UNLOAD
Sep 6 00:21:04.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.824000 audit: BPF prog-id=18 op=LOAD
Sep 6 00:21:04.824000 audit: BPF prog-id=19 op=LOAD
Sep 6 00:21:04.824000 audit: BPF prog-id=20 op=LOAD
Sep 6 00:21:04.824000 audit: BPF prog-id=16 op=UNLOAD
Sep 6 00:21:04.824000 audit: BPF prog-id=17 op=UNLOAD
Sep 6 00:21:04.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.899000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:21:04.899000 audit[957]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffda7bdcbd0 a2=4000 a3=7ffda7bdcc6c items=0 ppid=1 pid=957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:21:04.899000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:21:04.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.704409 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:21:04.912267 systemd[1]: Started systemd-journald.service.
Sep 6 00:21:01.377939 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:21:04.704434 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 6 00:21:01.378501 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:21:04.708999 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 6 00:21:01.378527 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:21:04.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:01.378571 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 6 00:21:01.378583 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 6 00:21:01.378629 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 6 00:21:01.378644 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 6 00:21:04.913166 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:21:01.378890 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 6 00:21:01.378942 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 6 00:21:01.378957 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 6 00:21:01.380427 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 6 00:21:01.380479 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 6 00:21:01.380505 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 6 00:21:01.380521 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 6 00:21:01.380544 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 6 00:21:01.380560 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 6 00:21:04.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.254867 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:21:04.255427 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:21:04.255663 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:21:04.256004 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 6 00:21:04.256105 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 6 00:21:04.256222 /usr/lib/systemd/system-generators/torcx-generator[887]: time="2025-09-06T00:21:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 6 00:21:04.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.916654 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:21:04.918233 systemd[1]: Reached target network-pre.target.
Sep 6 00:21:04.920803 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:21:04.925606 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:21:04.931883 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:21:04.934573 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:21:04.937157 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:21:04.937805 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:21:04.943332 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:21:04.944045 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:21:04.947902 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:21:04.954354 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:21:04.956550 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:21:04.970075 systemd-journald[957]: Time spent on flushing to /var/log/journal/0bb498aaa6234fb2b2a442875bbfa783 is 59.550ms for 1150 entries.
Sep 6 00:21:04.970075 systemd-journald[957]: System Journal (/var/log/journal/0bb498aaa6234fb2b2a442875bbfa783) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:21:05.035939 systemd-journald[957]: Received client request to flush runtime journal.
Sep 6 00:21:04.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:04.972425 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:21:04.974960 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:21:04.981510 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:21:04.982341 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:21:04.995346 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:21:05.019293 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:21:05.021815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:21:05.037298 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:21:05.085897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:21:05.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.091875 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:21:05.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.093621 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:21:05.102833 udevadm[999]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:21:05.639697 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:21:05.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.640000 audit: BPF prog-id=21 op=LOAD
Sep 6 00:21:05.640000 audit: BPF prog-id=22 op=LOAD
Sep 6 00:21:05.640000 audit: BPF prog-id=7 op=UNLOAD
Sep 6 00:21:05.640000 audit: BPF prog-id=8 op=UNLOAD
Sep 6 00:21:05.641645 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:21:05.663148 systemd-udevd[1000]: Using default interface naming scheme 'v252'.
Sep 6 00:21:05.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.687000 audit: BPF prog-id=23 op=LOAD
Sep 6 00:21:05.686054 systemd[1]: Started systemd-udevd.service.
Sep 6 00:21:05.688421 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:21:05.695000 audit: BPF prog-id=24 op=LOAD
Sep 6 00:21:05.695000 audit: BPF prog-id=25 op=LOAD
Sep 6 00:21:05.695000 audit: BPF prog-id=26 op=LOAD
Sep 6 00:21:05.696870 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:21:05.737184 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 6 00:21:05.743698 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:21:05.743910 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:21:05.745373 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:21:05.746959 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:21:05.750313 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:21:05.750713 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:21:05.750786 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:21:05.750900 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:21:05.751422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:21:05.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.753569 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:21:05.754676 kernel: kauditd_printk_skb: 113 callbacks suppressed
Sep 6 00:21:05.754704 kernel: audit: type=1130 audit(1757118065.753:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.754251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:21:05.754389 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:21:05.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.759531 kernel: audit: type=1131 audit(1757118065.753:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.761696 kernel: audit: type=1130 audit(1757118065.759:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.760524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:21:05.760651 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:21:05.767711 kernel: audit: type=1131 audit(1757118065.759:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.767794 kernel: audit: type=1130 audit(1757118065.765:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.767619 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:21:05.767664 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:21:05.768455 kernel: audit: type=1131 audit(1757118065.765:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.772294 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:21:05.775463 kernel: audit: type=1130 audit(1757118065.772:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.859431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 6 00:21:05.864285 systemd-networkd[1007]: lo: Link UP
Sep 6 00:21:05.864297 systemd-networkd[1007]: lo: Gained carrier
Sep 6 00:21:05.864818 systemd-networkd[1007]: Enumeration completed
Sep 6 00:21:05.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.864908 systemd-networkd[1007]: eth1: Configuring with /run/systemd/network/10-26:5e:8c:30:ad:2f.network.
Sep 6 00:21:05.864942 systemd[1]: Started systemd-networkd.service.
Sep 6 00:21:05.868529 kernel: audit: type=1130 audit(1757118065.864:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:21:05.869523 systemd-networkd[1007]: eth1: Link UP
Sep 6 00:21:05.869533 systemd-networkd[1007]: eth1: Gained carrier
Sep 6 00:21:05.879428 kernel: ACPI: button: Power Button [PWRF]
Sep 6 00:21:05.879000 audit[1011]: AVC avc: denied { confidentiality } for pid=1011 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 6 00:21:05.888460 kernel: audit: type=1400 audit(1757118065.879:160): avc: denied { confidentiality } for pid=1011 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 6 00:21:05.879000 audit[1011]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d632dfb860 a1=338ec a2=7f7a3f2fbbc5 a3=5 items=110 ppid=1000 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:21:05.879000 audit: CWD cwd="/"
Sep 6 00:21:05.879000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=1 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=2 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=3 name=(null) inode=13659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=4 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=5 name=(null) inode=13660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.911129 kernel: audit: type=1300 audit(1757118065.879:160): arch=c000003e syscall=175 success=yes exit=0 a0=55d632dfb860 a1=338ec a2=7f7a3f2fbbc5 a3=5 items=110 ppid=1000 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:21:05.879000 audit: PATH item=6 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=7 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=8 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:21:05.879000 audit: PATH item=9 name=(null) inode=13662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6
00:21:05.879000 audit: PATH item=10 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=11 name=(null) inode=13663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=12 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=13 name=(null) inode=13664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=14 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=15 name=(null) inode=13665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=16 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=17 name=(null) inode=13666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=18 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=19 name=(null) 
inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=20 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=21 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=22 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=23 name=(null) inode=13669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=24 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=25 name=(null) inode=13670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=26 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=27 name=(null) inode=13671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=28 name=(null) inode=13667 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=29 name=(null) inode=13672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=30 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=31 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=32 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=33 name=(null) inode=13674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=34 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=35 name=(null) inode=13675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=36 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=37 name=(null) inode=13676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=38 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=39 name=(null) inode=13677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=40 name=(null) inode=13673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=41 name=(null) inode=13678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=42 name=(null) inode=13658 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=43 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=44 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=45 name=(null) inode=13680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=46 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=47 name=(null) inode=13681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=48 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=49 name=(null) inode=13682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=50 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=51 name=(null) inode=13683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=52 name=(null) inode=13679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=53 name=(null) inode=13684 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=55 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: 
PATH item=56 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=57 name=(null) inode=13686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=58 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=59 name=(null) inode=13687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=60 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=61 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=62 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=63 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.910472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Sep 6 00:21:05.879000 audit: PATH item=64 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=65 name=(null) inode=13690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=66 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=67 name=(null) inode=13691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=68 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=69 name=(null) inode=13692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=70 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=71 name=(null) inode=13693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=72 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=73 
name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=74 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=75 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=76 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=77 name=(null) inode=13696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=78 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=79 name=(null) inode=13697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=80 name=(null) inode=13694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=81 name=(null) inode=13698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=82 name=(null) inode=13694 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=83 name=(null) inode=13699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=84 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=85 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=86 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=87 name=(null) inode=13701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=88 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=89 name=(null) inode=13702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=90 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=91 name=(null) inode=13703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=92 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=93 name=(null) inode=13704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=94 name=(null) inode=13700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=95 name=(null) inode=13705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=96 name=(null) inode=13685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=97 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=98 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=99 name=(null) inode=13707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=100 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=101 name=(null) inode=13708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=102 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=103 name=(null) inode=13709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=104 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=105 name=(null) inode=13710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=106 name=(null) inode=13706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=107 name=(null) inode=13711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:21:05.879000 audit: PATH item=109 name=(null) inode=13714 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:21:05.879000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:21:05.940422 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 6 00:21:05.943439 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:21:05.948495 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:21:05.987786 systemd-networkd[1007]: eth0: Configuring with /run/systemd/network/10-86:cd:29:a9:34:37.network. Sep 6 00:21:05.988390 systemd-networkd[1007]: eth0: Link UP Sep 6 00:21:05.988412 systemd-networkd[1007]: eth0: Gained carrier Sep 6 00:21:06.080428 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:21:06.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.107213 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:21:06.109996 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:21:06.130948 lvm[1038]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:21:06.162211 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:21:06.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.162944 systemd[1]: Reached target cryptsetup.target. Sep 6 00:21:06.165209 systemd[1]: Starting lvm2-activation.service... Sep 6 00:21:06.170319 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:21:06.192919 systemd[1]: Finished lvm2-activation.service. Sep 6 00:21:06.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:21:06.193512 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:21:06.195414 systemd[1]: Mounting media-configdrive.mount... Sep 6 00:21:06.195791 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:21:06.195861 systemd[1]: Reached target machines.target. Sep 6 00:21:06.197382 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:21:06.211034 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:21:06.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.212417 kernel: ISO 9660 Extensions: RRIP_1991A Sep 6 00:21:06.213694 systemd[1]: Mounted media-configdrive.mount. Sep 6 00:21:06.214203 systemd[1]: Reached target local-fs.target. Sep 6 00:21:06.215916 systemd[1]: Starting ldconfig.service... Sep 6 00:21:06.217535 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.217616 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:06.219219 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:21:06.221882 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:21:06.227085 systemd[1]: Starting systemd-sysext.service... Sep 6 00:21:06.233646 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Sep 6 00:21:06.235082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:21:06.258661 systemd[1]: Unmounting usr-share-oem.mount... 
Sep 6 00:21:06.275601 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:21:06.275826 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:21:06.286855 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:21:06.289827 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:21:06.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.294440 kernel: loop0: detected capacity change from 0 to 229808 Sep 6 00:21:06.336429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:21:06.355551 kernel: loop1: detected capacity change from 0 to 229808 Sep 6 00:21:06.356985 systemd-fsck[1052]: fsck.fat 4.2 (2021-01-31) Sep 6 00:21:06.356985 systemd-fsck[1052]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:21:06.359381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:21:06.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.361308 systemd[1]: Mounting boot.mount... Sep 6 00:21:06.378464 (sd-sysext)[1055]: Using extensions 'kubernetes'. Sep 6 00:21:06.378890 (sd-sysext)[1055]: Merged extensions into '/usr'. Sep 6 00:21:06.399163 systemd[1]: Mounted boot.mount. Sep 6 00:21:06.408069 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:21:06.411971 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:21:06.412609 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.417073 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 6 00:21:06.419748 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:21:06.421783 systemd[1]: Starting modprobe@loop.service... Sep 6 00:21:06.422735 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.422963 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:06.423180 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:21:06.430879 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:21:06.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.432623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:21:06.432793 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:21:06.433691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:21:06.433831 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:21:06.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:06.434627 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:21:06.434749 systemd[1]: Finished modprobe@loop.service. Sep 6 00:21:06.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.435599 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:21:06.435712 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.437071 systemd[1]: Finished systemd-sysext.service. Sep 6 00:21:06.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.439343 systemd[1]: Starting ensure-sysext.service... Sep 6 00:21:06.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.445070 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:21:06.446125 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:21:06.453279 systemd[1]: Reloading. Sep 6 00:21:06.464287 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:21:06.468145 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 6 00:21:06.471289 systemd-tmpfiles[1064]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:21:06.618208 /usr/lib/systemd/system-generators/torcx-generator[1083]: time="2025-09-06T00:21:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:06.618710 /usr/lib/systemd/system-generators/torcx-generator[1083]: time="2025-09-06T00:21:06Z" level=info msg="torcx already run" Sep 6 00:21:06.620112 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:21:06.718982 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:06.719230 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:06.739894 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 6 00:21:06.802000 audit: BPF prog-id=27 op=LOAD Sep 6 00:21:06.802000 audit: BPF prog-id=28 op=LOAD Sep 6 00:21:06.803000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:21:06.803000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:21:06.804000 audit: BPF prog-id=29 op=LOAD Sep 6 00:21:06.804000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:21:06.804000 audit: BPF prog-id=30 op=LOAD Sep 6 00:21:06.804000 audit: BPF prog-id=31 op=LOAD Sep 6 00:21:06.804000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:21:06.804000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:21:06.807000 audit: BPF prog-id=32 op=LOAD Sep 6 00:21:06.807000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:21:06.810000 audit: BPF prog-id=33 op=LOAD Sep 6 00:21:06.810000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:21:06.810000 audit: BPF prog-id=34 op=LOAD Sep 6 00:21:06.810000 audit: BPF prog-id=35 op=LOAD Sep 6 00:21:06.810000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:21:06.811000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:21:06.818665 systemd[1]: Finished ldconfig.service. Sep 6 00:21:06.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.821109 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:21:06.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.833437 systemd[1]: Starting audit-rules.service... Sep 6 00:21:06.835779 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:21:06.838878 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:21:06.840000 audit: BPF prog-id=36 op=LOAD Sep 6 00:21:06.842717 systemd[1]: Starting systemd-resolved.service... 
Sep 6 00:21:06.845000 audit: BPF prog-id=37 op=LOAD Sep 6 00:21:06.847444 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:21:06.849724 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:21:06.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.853667 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:21:06.856185 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.859812 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:21:06.861878 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:21:06.865949 systemd[1]: Starting modprobe@loop.service... Sep 6 00:21:06.866394 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:21:06.866563 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:06.866679 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:21:06.867575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:21:06.867713 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:21:06.870487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:21:06.870646 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:21:06.871523 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:21:06.873952 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.875570 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:21:06.877589 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:21:06.878033 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.878167 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:06.878289 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:21:06.882392 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.885176 systemd[1]: Starting modprobe@drm.service... Sep 6 00:21:06.886234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:21:06.886396 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:06.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.899000 audit[1136]: SYSTEM_BOOT pid=1136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.890111 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:21:06.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.890702 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:21:06.891999 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:21:06.892172 systemd[1]: Finished modprobe@loop.service. Sep 6 00:21:06.895472 systemd[1]: Finished ensure-sysext.service. Sep 6 00:21:06.902570 systemd[1]: Finished systemd-update-utmp.service. 
Sep 6 00:21:06.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.906878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:21:06.907024 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:21:06.907589 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:21:06.914579 systemd-networkd[1007]: eth1: Gained IPv6LL Sep 6 00:21:06.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.920745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:21:06.920886 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:21:06.921591 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:21:06.922037 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:21:06.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.924091 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:21:06.924222 systemd[1]: Finished modprobe@drm.service. Sep 6 00:21:06.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:21:06.954286 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:21:06.956350 systemd[1]: Starting systemd-update-done.service... Sep 6 00:21:06.968538 systemd[1]: Finished systemd-update-done.service. Sep 6 00:21:06.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:21:06.969000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:21:06.969000 audit[1159]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff62299420 a2=420 a3=0 items=0 ppid=1131 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:21:06.969000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:21:06.970004 augenrules[1159]: No rules Sep 6 00:21:06.971013 systemd[1]: Finished audit-rules.service. Sep 6 00:21:06.982565 systemd-resolved[1134]: Positive Trust Anchors: Sep 6 00:21:06.982584 systemd-resolved[1134]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:21:06.982635 systemd-resolved[1134]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:21:06.991342 systemd-resolved[1134]: Using system hostname 'ci-3510.3.8-n-f7f83b6e50'. Sep 6 00:21:06.994528 systemd[1]: Started systemd-resolved.service. Sep 6 00:21:06.994989 systemd[1]: Reached target network.target. Sep 6 00:21:06.995322 systemd[1]: Reached target network-online.target. Sep 6 00:21:06.995638 systemd[1]: Reached target nss-lookup.target. Sep 6 00:21:07.003453 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:21:07.003931 systemd[1]: Reached target sysinit.target. Sep 6 00:21:07.004333 systemd[1]: Started motdgen.path. 
Sep 6 00:21:07.004674 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:21:07.005029 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:21:07.005376 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:21:07.005420 systemd[1]: Reached target paths.target. Sep 6 00:21:07.005718 systemd[1]: Reached target time-set.target. Sep 6 00:21:07.006193 systemd[1]: Started logrotate.timer. Sep 6 00:21:07.006605 systemd[1]: Started mdadm.timer. Sep 6 00:21:07.006890 systemd[1]: Reached target timers.target. Sep 6 00:21:07.007585 systemd[1]: Listening on dbus.socket. Sep 6 00:21:07.009191 systemd[1]: Starting docker.socket... Sep 6 00:21:07.013445 systemd[1]: Listening on sshd.socket. Sep 6 00:21:07.014378 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:07.015894 systemd[1]: Listening on docker.socket. Sep 6 00:21:07.016717 systemd[1]: Reached target sockets.target. Sep 6 00:21:07.017059 systemd[1]: Reached target basic.target. Sep 6 00:21:07.017475 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:21:07.017510 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:21:07.018863 systemd[1]: Starting containerd.service... Sep 6 00:21:07.021215 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:21:07.023602 systemd[1]: Starting dbus.service... Sep 6 00:21:07.026051 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:21:07.029350 systemd[1]: Starting extend-filesystems.service... 
Sep 6 00:21:07.043892 jq[1172]: false Sep 6 00:21:07.030370 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:21:07.034240 systemd[1]: Starting kubelet.service... Sep 6 00:21:07.037104 systemd[1]: Starting motdgen.service... Sep 6 00:21:07.040607 systemd[1]: Starting prepare-helm.service... Sep 6 00:21:07.044268 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:21:07.047753 systemd[1]: Starting sshd-keygen.service... Sep 6 00:21:07.053600 systemd[1]: Starting systemd-logind.service... Sep 6 00:21:07.054018 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:21:07.054124 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:21:07.086323 jq[1184]: true Sep 6 00:21:07.054782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:21:07.055808 systemd[1]: Starting update-engine.service... Sep 6 00:21:07.091744 tar[1186]: linux-amd64/LICENSE Sep 6 00:21:07.091744 tar[1186]: linux-amd64/helm Sep 6 00:21:07.058910 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:21:07.064153 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:21:07.065015 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:21:07.067657 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:21:07.068015 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:21:07.606828 systemd-timesyncd[1135]: Contacted time server 50.218.103.254:123 (0.flatcar.pool.ntp.org). 
Sep 6 00:21:07.606907 systemd-timesyncd[1135]: Initial clock synchronization to Sat 2025-09-06 00:21:07.606645 UTC. Sep 6 00:21:07.608133 systemd-resolved[1134]: Clock change detected. Flushing caches. Sep 6 00:21:07.637760 dbus-daemon[1169]: [system] SELinux support is enabled Sep 6 00:21:07.638026 systemd[1]: Started dbus.service. Sep 6 00:21:07.640801 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:21:07.640832 systemd[1]: Reached target system-config.target. Sep 6 00:21:07.641287 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:21:07.641326 systemd[1]: Reached target user-config.target. Sep 6 00:21:07.651331 jq[1189]: true Sep 6 00:21:07.654476 extend-filesystems[1173]: Found loop1 Sep 6 00:21:07.663930 extend-filesystems[1173]: Found vda Sep 6 00:21:07.666351 extend-filesystems[1173]: Found vda1 Sep 6 00:21:07.668234 extend-filesystems[1173]: Found vda2 Sep 6 00:21:07.668990 extend-filesystems[1173]: Found vda3 Sep 6 00:21:07.673119 extend-filesystems[1173]: Found usr Sep 6 00:21:07.673736 extend-filesystems[1173]: Found vda4 Sep 6 00:21:07.674154 extend-filesystems[1173]: Found vda6 Sep 6 00:21:07.674936 extend-filesystems[1173]: Found vda7 Sep 6 00:21:07.683249 extend-filesystems[1173]: Found vda9 Sep 6 00:21:07.685006 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:21:07.685251 systemd[1]: Finished motdgen.service. Sep 6 00:21:07.686159 extend-filesystems[1173]: Checking size of /dev/vda9 Sep 6 00:21:07.725725 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:21:07.725753 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 6 00:21:07.750868 update_engine[1182]: I0906 00:21:07.749265 1182 main.cc:92] Flatcar Update Engine starting Sep 6 00:21:07.752751 extend-filesystems[1173]: Resized partition /dev/vda9 Sep 6 00:21:07.754462 systemd[1]: Started update-engine.service. Sep 6 00:21:07.760481 update_engine[1182]: I0906 00:21:07.754549 1182 update_check_scheduler.cc:74] Next update check in 10m16s Sep 6 00:21:07.757010 systemd[1]: Started locksmithd.service. Sep 6 00:21:07.764353 extend-filesystems[1223]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:21:07.770167 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 6 00:21:07.813305 bash[1222]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:21:07.814987 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:21:07.857850 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 6 00:21:07.878963 systemd-logind[1179]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:21:07.878997 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:21:07.880818 extend-filesystems[1223]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:21:07.880818 extend-filesystems[1223]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 6 00:21:07.880818 extend-filesystems[1223]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 6 00:21:07.893459 extend-filesystems[1173]: Resized filesystem in /dev/vda9 Sep 6 00:21:07.893459 extend-filesystems[1173]: Found vdb Sep 6 00:21:07.883021 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:21:07.901514 env[1191]: time="2025-09-06T00:21:07.895496265Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:21:07.883311 systemd[1]: Finished extend-filesystems.service. Sep 6 00:21:07.883471 systemd-logind[1179]: New seat seat0. Sep 6 00:21:07.890046 systemd[1]: Started systemd-logind.service. 
Sep 6 00:21:07.997022 coreos-metadata[1168]: Sep 06 00:21:07.996 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:21:08.003982 systemd-networkd[1007]: eth0: Gained IPv6LL Sep 6 00:21:08.016628 coreos-metadata[1168]: Sep 06 00:21:08.014 INFO Fetch successful Sep 6 00:21:08.027275 unknown[1168]: wrote ssh authorized keys file for user: core Sep 6 00:21:08.051270 update-ssh-keys[1229]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:21:08.051835 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:21:08.061477 env[1191]: time="2025-09-06T00:21:08.061369657Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:21:08.061654 env[1191]: time="2025-09-06T00:21:08.061623333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069016958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069115029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069485149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069522190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069544298Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069560373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.069684108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070061 env[1191]: time="2025-09-06T00:21:08.070062973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070522 env[1191]: time="2025-09-06T00:21:08.070298795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:21:08.070522 env[1191]: time="2025-09-06T00:21:08.070325400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:21:08.070522 env[1191]: time="2025-09-06T00:21:08.070402717Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:21:08.070522 env[1191]: time="2025-09-06T00:21:08.070421236Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078664517Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078731484Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078755967Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078836585Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078858926Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078932969Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078955846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078975790Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.078993985Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.079018157Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.079036276Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.079244 env[1191]: time="2025-09-06T00:21:08.079060859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:21:08.080074 env[1191]: time="2025-09-06T00:21:08.079296576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 6 00:21:08.080074 env[1191]: time="2025-09-06T00:21:08.079446283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:21:08.080074 env[1191]: time="2025-09-06T00:21:08.079939845Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:21:08.080074 env[1191]: time="2025-09-06T00:21:08.079999274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080074 env[1191]: time="2025-09-06T00:21:08.080026318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080126006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080149904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080281109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080305384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080323116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080339587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080359132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080377749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080461 env[1191]: time="2025-09-06T00:21:08.080399548Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080584538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080611595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080630648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080646189Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080667175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080682510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080724944Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:21:08.080811 env[1191]: time="2025-09-06T00:21:08.080777208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:21:08.082907 systemd[1]: Started containerd.service. 
Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.081036208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.081157542Z" level=info msg="Connect containerd service" Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.081207584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.082124719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.082596106Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.082677413Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:21:08.084787 env[1191]: time="2025-09-06T00:21:08.083832306Z" level=info msg="containerd successfully booted in 0.227792s" Sep 6 00:21:08.090197 env[1191]: time="2025-09-06T00:21:08.090083729Z" level=info msg="Start subscribing containerd event" Sep 6 00:21:08.091025 env[1191]: time="2025-09-06T00:21:08.090375875Z" level=info msg="Start recovering state" Sep 6 00:21:08.091025 env[1191]: time="2025-09-06T00:21:08.090510538Z" level=info msg="Start event monitor" Sep 6 00:21:08.092953 env[1191]: time="2025-09-06T00:21:08.092893658Z" level=info msg="Start snapshots syncer" Sep 6 00:21:08.092953 env[1191]: time="2025-09-06T00:21:08.092955887Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:21:08.093181 env[1191]: time="2025-09-06T00:21:08.092984619Z" level=info msg="Start streaming server" Sep 6 00:21:08.587248 locksmithd[1224]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:21:09.049658 tar[1186]: linux-amd64/README.md Sep 6 
00:21:09.064144 systemd[1]: Finished prepare-helm.service. Sep 6 00:21:09.143229 sshd_keygen[1199]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:21:09.181258 systemd[1]: Finished sshd-keygen.service. Sep 6 00:21:09.184368 systemd[1]: Starting issuegen.service... Sep 6 00:21:09.196390 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:21:09.196624 systemd[1]: Finished issuegen.service. Sep 6 00:21:09.199735 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:21:09.210084 systemd[1]: Created slice system-sshd.slice. Sep 6 00:21:09.212258 systemd[1]: Started sshd@0-143.198.64.97:22-147.75.109.163:50932.service. Sep 6 00:21:09.215691 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:21:09.219776 systemd[1]: Started getty@tty1.service. Sep 6 00:21:09.224852 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:21:09.227518 systemd[1]: Reached target getty.target. Sep 6 00:21:09.341356 sshd[1251]: Accepted publickey for core from 147.75.109.163 port 50932 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:09.347593 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:09.360834 systemd[1]: Created slice user-500.slice. Sep 6 00:21:09.363843 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:21:09.370312 systemd-logind[1179]: New session 1 of user core. Sep 6 00:21:09.378739 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:21:09.382488 systemd[1]: Starting user@500.service... Sep 6 00:21:09.389602 (systemd)[1256]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:09.414529 systemd[1]: Started kubelet.service. Sep 6 00:21:09.415631 systemd[1]: Reached target multi-user.target. Sep 6 00:21:09.418378 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:21:09.431054 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Sep 6 00:21:09.431261 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:21:09.503078 systemd[1256]: Queued start job for default target default.target. Sep 6 00:21:09.503685 systemd[1256]: Reached target paths.target. Sep 6 00:21:09.503707 systemd[1256]: Reached target sockets.target. Sep 6 00:21:09.503720 systemd[1256]: Reached target timers.target. Sep 6 00:21:09.503732 systemd[1256]: Reached target basic.target. Sep 6 00:21:09.503859 systemd[1]: Started user@500.service. Sep 6 00:21:09.505608 systemd[1]: Started session-1.scope. Sep 6 00:21:09.506083 systemd[1]: Startup finished in 902ms (kernel) + 5.444s (initrd) + 7.847s (userspace) = 14.193s. Sep 6 00:21:09.522620 systemd[1256]: Reached target default.target. Sep 6 00:21:09.522707 systemd[1256]: Startup finished in 122ms. Sep 6 00:21:09.583673 systemd[1]: Started sshd@1-143.198.64.97:22-147.75.109.163:50948.service. Sep 6 00:21:09.642564 sshd[1273]: Accepted publickey for core from 147.75.109.163 port 50948 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:09.644993 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:09.651641 systemd[1]: Started session-2.scope. Sep 6 00:21:09.653220 systemd-logind[1179]: New session 2 of user core. Sep 6 00:21:09.722729 sshd[1273]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:09.728712 systemd[1]: Started sshd@2-143.198.64.97:22-147.75.109.163:50958.service. Sep 6 00:21:09.731512 systemd[1]: sshd@1-143.198.64.97:22-147.75.109.163:50948.service: Deactivated successfully. Sep 6 00:21:09.732336 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:21:09.735400 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:21:09.738810 systemd-logind[1179]: Removed session 2. 
Sep 6 00:21:09.784640 sshd[1278]: Accepted publickey for core from 147.75.109.163 port 50958 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:09.787080 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:09.796904 systemd[1]: Started session-3.scope. Sep 6 00:21:09.797609 systemd-logind[1179]: New session 3 of user core. Sep 6 00:21:09.861152 sshd[1278]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:09.870560 systemd[1]: Started sshd@3-143.198.64.97:22-147.75.109.163:50968.service. Sep 6 00:21:09.874309 systemd[1]: sshd@2-143.198.64.97:22-147.75.109.163:50958.service: Deactivated successfully. Sep 6 00:21:09.875517 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:21:09.879661 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:21:09.881333 systemd-logind[1179]: Removed session 3. Sep 6 00:21:09.936966 sshd[1284]: Accepted publickey for core from 147.75.109.163 port 50968 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:09.937940 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:09.946363 systemd[1]: Started session-4.scope. Sep 6 00:21:09.946965 systemd-logind[1179]: New session 4 of user core. Sep 6 00:21:10.022897 sshd[1284]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:10.031748 systemd[1]: Started sshd@4-143.198.64.97:22-147.75.109.163:49494.service. Sep 6 00:21:10.036533 systemd[1]: sshd@3-143.198.64.97:22-147.75.109.163:50968.service: Deactivated successfully. Sep 6 00:21:10.037778 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:21:10.040331 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:21:10.042514 systemd-logind[1179]: Removed session 4. 
Sep 6 00:21:10.101180 sshd[1290]: Accepted publickey for core from 147.75.109.163 port 49494 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:21:10.103168 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:10.110237 systemd-logind[1179]: New session 5 of user core. Sep 6 00:21:10.110625 systemd[1]: Started session-5.scope. Sep 6 00:21:10.195404 sudo[1294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:21:10.196361 sudo[1294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:21:10.243888 systemd[1]: Starting docker.service... Sep 6 00:21:10.319180 env[1304]: time="2025-09-06T00:21:10.318440533Z" level=info msg="Starting up" Sep 6 00:21:10.324461 env[1304]: time="2025-09-06T00:21:10.324407426Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:21:10.324664 env[1304]: time="2025-09-06T00:21:10.324642323Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:21:10.324767 env[1304]: time="2025-09-06T00:21:10.324748569Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:21:10.324838 env[1304]: time="2025-09-06T00:21:10.324820136Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:21:10.330856 env[1304]: time="2025-09-06T00:21:10.330808645Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:21:10.330856 env[1304]: time="2025-09-06T00:21:10.330846428Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:21:10.331019 env[1304]: time="2025-09-06T00:21:10.330886089Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:21:10.331019 env[1304]: time="2025-09-06T00:21:10.330904822Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Sep 6 00:21:10.344438 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1541055469-merged.mount: Deactivated successfully. Sep 6 00:21:10.370496 kubelet[1262]: E0906 00:21:10.370378 1262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:21:10.373198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:21:10.373341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:10.373622 systemd[1]: kubelet.service: Consumed 1.417s CPU time. Sep 6 00:21:10.390177 env[1304]: time="2025-09-06T00:21:10.390025188Z" level=info msg="Loading containers: start." Sep 6 00:21:10.563134 kernel: Initializing XFRM netlink socket Sep 6 00:21:10.606619 env[1304]: time="2025-09-06T00:21:10.606521357Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:21:10.696172 systemd-networkd[1007]: docker0: Link UP Sep 6 00:21:10.716713 env[1304]: time="2025-09-06T00:21:10.716671424Z" level=info msg="Loading containers: done." Sep 6 00:21:10.733879 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2747246766-merged.mount: Deactivated successfully. 
Sep 6 00:21:10.737824 env[1304]: time="2025-09-06T00:21:10.737774744Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:21:10.738454 env[1304]: time="2025-09-06T00:21:10.738412647Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:21:10.738834 env[1304]: time="2025-09-06T00:21:10.738803186Z" level=info msg="Daemon has completed initialization" Sep 6 00:21:10.757038 systemd[1]: Started docker.service. Sep 6 00:21:10.764343 env[1304]: time="2025-09-06T00:21:10.764223761Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:21:10.790304 systemd[1]: Starting coreos-metadata.service... Sep 6 00:21:10.845893 coreos-metadata[1421]: Sep 06 00:21:10.845 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:21:10.859897 coreos-metadata[1421]: Sep 06 00:21:10.859 INFO Fetch successful Sep 6 00:21:10.877757 systemd[1]: Finished coreos-metadata.service. Sep 6 00:21:11.785366 env[1191]: time="2025-09-06T00:21:11.785314158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 6 00:21:12.326692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668665783.mount: Deactivated successfully. 
Sep 6 00:21:14.348900 env[1191]: time="2025-09-06T00:21:14.348798287Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.351196 env[1191]: time="2025-09-06T00:21:14.351137794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.353598 env[1191]: time="2025-09-06T00:21:14.353536968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.356054 env[1191]: time="2025-09-06T00:21:14.355989042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:14.357300 env[1191]: time="2025-09-06T00:21:14.357247628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 6 00:21:14.358024 env[1191]: time="2025-09-06T00:21:14.357983489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 6 00:21:16.186387 env[1191]: time="2025-09-06T00:21:16.186304384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:16.188048 env[1191]: time="2025-09-06T00:21:16.188001103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Sep 6 00:21:16.190469 env[1191]: time="2025-09-06T00:21:16.190430195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:16.193035 env[1191]: time="2025-09-06T00:21:16.192992311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:16.194035 env[1191]: time="2025-09-06T00:21:16.193986934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 6 00:21:16.195619 env[1191]: time="2025-09-06T00:21:16.195568003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 6 00:21:18.083093 env[1191]: time="2025-09-06T00:21:18.083017949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:18.085068 env[1191]: time="2025-09-06T00:21:18.085004514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:18.087406 env[1191]: time="2025-09-06T00:21:18.087358113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:18.089775 env[1191]: time="2025-09-06T00:21:18.089716338Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:18.091489 env[1191]: time="2025-09-06T00:21:18.091412969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 6 00:21:18.092571 env[1191]: time="2025-09-06T00:21:18.092513424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 6 00:21:19.496297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139433920.mount: Deactivated successfully. Sep 6 00:21:20.510007 env[1191]: time="2025-09-06T00:21:20.509932792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:20.512435 env[1191]: time="2025-09-06T00:21:20.512062148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:20.513701 env[1191]: time="2025-09-06T00:21:20.513645731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:20.515137 env[1191]: time="2025-09-06T00:21:20.515072018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:20.515803 env[1191]: time="2025-09-06T00:21:20.515755113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference 
\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 6 00:21:20.516545 env[1191]: time="2025-09-06T00:21:20.516509196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 6 00:21:20.597802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:21:20.598117 systemd[1]: Stopped kubelet.service. Sep 6 00:21:20.598191 systemd[1]: kubelet.service: Consumed 1.417s CPU time. Sep 6 00:21:20.600871 systemd[1]: Starting kubelet.service... Sep 6 00:21:20.724327 systemd[1]: Started kubelet.service. Sep 6 00:21:20.804495 kubelet[1445]: E0906 00:21:20.803751 1445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:21:20.815795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:21:20.816002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:20.938483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653907818.mount: Deactivated successfully. 
Sep 6 00:21:22.192557 env[1191]: time="2025-09-06T00:21:22.192489453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.194235 env[1191]: time="2025-09-06T00:21:22.194189532Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.196245 env[1191]: time="2025-09-06T00:21:22.196206430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.198725 env[1191]: time="2025-09-06T00:21:22.198671080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.199562 env[1191]: time="2025-09-06T00:21:22.199517291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 6 00:21:22.200369 env[1191]: time="2025-09-06T00:21:22.200339335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:21:22.644921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854056762.mount: Deactivated successfully. 
Sep 6 00:21:22.649669 env[1191]: time="2025-09-06T00:21:22.649597074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.651404 env[1191]: time="2025-09-06T00:21:22.651333807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.652827 env[1191]: time="2025-09-06T00:21:22.652786270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.654195 env[1191]: time="2025-09-06T00:21:22.654159314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:22.654817 env[1191]: time="2025-09-06T00:21:22.654782153Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:21:22.655458 env[1191]: time="2025-09-06T00:21:22.655430503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 6 00:21:23.198304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171533793.mount: Deactivated successfully. 
Sep 6 00:21:25.684476 env[1191]: time="2025-09-06T00:21:25.684413588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.686579 env[1191]: time="2025-09-06T00:21:25.686529275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.689492 env[1191]: time="2025-09-06T00:21:25.689439542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.696044 env[1191]: time="2025-09-06T00:21:25.695980073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:25.699284 env[1191]: time="2025-09-06T00:21:25.697790347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 6 00:21:30.847384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:21:30.847637 systemd[1]: Stopped kubelet.service. Sep 6 00:21:30.851329 systemd[1]: Starting kubelet.service... Sep 6 00:21:31.028910 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 00:21:31.029016 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 00:21:31.029611 systemd[1]: Stopped kubelet.service. Sep 6 00:21:31.035427 systemd[1]: Starting kubelet.service... Sep 6 00:21:31.080499 systemd[1]: Reloading. 
Sep 6 00:21:31.224306 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-09-06T00:21:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:31.224336 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-09-06T00:21:31Z" level=info msg="torcx already run" Sep 6 00:21:31.348748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:31.349006 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:31.371870 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:21:31.506901 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 00:21:31.507273 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 00:21:31.507726 systemd[1]: Stopped kubelet.service. Sep 6 00:21:31.511336 systemd[1]: Starting kubelet.service... Sep 6 00:21:31.682525 systemd[1]: Started kubelet.service. Sep 6 00:21:31.759716 kubelet[1553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:31.759716 kubelet[1553]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:21:31.759716 kubelet[1553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:31.759716 kubelet[1553]: I0906 00:21:31.759503 1553 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:32.389244 kubelet[1553]: I0906 00:21:32.389196 1553 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:21:32.389443 kubelet[1553]: I0906 00:21:32.389429 1553 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:32.389762 kubelet[1553]: I0906 00:21:32.389746 1553 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:21:32.431356 kubelet[1553]: E0906 00:21:32.431312 1553 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.64.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 6 00:21:32.432686 kubelet[1553]: I0906 00:21:32.432644 1553 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:32.445652 kubelet[1553]: E0906 00:21:32.445581 1553 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:32.445891 kubelet[1553]: I0906 00:21:32.445870 1553 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:21:32.450394 kubelet[1553]: I0906 00:21:32.450346 1553 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:21:32.450884 kubelet[1553]: I0906 00:21:32.450832 1553 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:32.451216 kubelet[1553]: I0906 00:21:32.450988 1553 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-f7f83b6e50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Sep 6 00:21:32.451431 kubelet[1553]: I0906 00:21:32.451416 1553 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:21:32.451558 kubelet[1553]: I0906 00:21:32.451545 1553 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 00:21:32.451811 kubelet[1553]: I0906 00:21:32.451795 1553 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:21:32.454366 kubelet[1553]: I0906 00:21:32.454326 1553 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 00:21:32.454704 kubelet[1553]: I0906 00:21:32.454684 1553 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:21:32.454831 kubelet[1553]: I0906 00:21:32.454819 1553 kubelet.go:386] "Adding apiserver pod source"
Sep 6 00:21:32.471306 kubelet[1553]: E0906 00:21:32.471260 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.64.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f7f83b6e50&limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 00:21:32.471505 kubelet[1553]: I0906 00:21:32.471489 1553 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:21:32.475334 kubelet[1553]: E0906 00:21:32.475247 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.64.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 00:21:32.481718 kubelet[1553]: I0906 00:21:32.481653 1553 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 00:21:32.482942 kubelet[1553]: I0906 00:21:32.482903 1553 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 00:21:32.484086 kubelet[1553]: W0906 00:21:32.484050 1553 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:21:32.489846 kubelet[1553]: I0906 00:21:32.489810 1553 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 00:21:32.490417 kubelet[1553]: I0906 00:21:32.490392 1553 server.go:1289] "Started kubelet"
Sep 6 00:21:32.494607 kubelet[1553]: I0906 00:21:32.494536 1553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:21:32.496453 kubelet[1553]: I0906 00:21:32.496414 1553 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:21:32.496783 kubelet[1553]: I0906 00:21:32.496726 1553 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:21:32.497278 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
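The repeated `dial tcp 143.198.64.97:6443: connect: connection refused` errors above simply mean nothing is listening on the apiserver port yet while the kubelet bootstraps the control-plane static pods. A minimal stdlib-only reachability probe of the same kind of check, offered as an illustrative sketch (host and port taken from the log; the helper is not part of any tool shown here):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ECONNREFUSED, timeouts, unreachable hosts
        return False

# port_open("143.198.64.97", 6443) stays False until kube-apiserver binds 6443,
# which is exactly the window these "connection refused" log entries cover.
```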
Sep 6 00:21:32.497430 kubelet[1553]: I0906 00:21:32.497410 1553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:21:32.499270 kubelet[1553]: I0906 00:21:32.499196 1553 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 00:21:32.505522 kubelet[1553]: E0906 00:21:32.503781 1553 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.64.97:6443/api/v1/namespaces/default/events\": dial tcp 143.198.64.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-f7f83b6e50.186289987347fa89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-f7f83b6e50,UID:ci-3510.3.8-n-f7f83b6e50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-f7f83b6e50,},FirstTimestamp:2025-09-06 00:21:32.490046089 +0000 UTC m=+0.799584448,LastTimestamp:2025-09-06 00:21:32.490046089 +0000 UTC m=+0.799584448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-f7f83b6e50,}"
Sep 6 00:21:32.506500 kubelet[1553]: I0906 00:21:32.505761 1553 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:21:32.511517 kubelet[1553]: I0906 00:21:32.511481 1553 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 00:21:32.511717 kubelet[1553]: E0906 00:21:32.511695 1553 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found"
Sep 6 00:21:32.512087 kubelet[1553]: I0906 00:21:32.512050 1553 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 00:21:32.512248 kubelet[1553]: I0906 00:21:32.512136 1553 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:21:32.513019 kubelet[1553]: I0906 00:21:32.512968 1553 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:21:32.513590 kubelet[1553]: E0906 00:21:32.513544 1553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.64.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f7f83b6e50?timeout=10s\": dial tcp 143.198.64.97:6443: connect: connection refused" interval="200ms"
Sep 6 00:21:32.514974 kubelet[1553]: I0906 00:21:32.514939 1553 factory.go:223] Registration of the containerd container factory successfully
Sep 6 00:21:32.514974 kubelet[1553]: I0906 00:21:32.514964 1553 factory.go:223] Registration of the systemd container factory successfully
Sep 6 00:21:32.516418 kubelet[1553]: E0906 00:21:32.516368 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.64.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 00:21:32.516831 kubelet[1553]: E0906 00:21:32.516784 1553 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:21:32.533444 kubelet[1553]: I0906 00:21:32.533408 1553 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 00:21:32.533715 kubelet[1553]: I0906 00:21:32.533692 1553 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 00:21:32.533868 kubelet[1553]: I0906 00:21:32.533853 1553 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:21:32.535998 kubelet[1553]: I0906 00:21:32.535962 1553 policy_none.go:49] "None policy: Start"
Sep 6 00:21:32.536306 kubelet[1553]: I0906 00:21:32.536281 1553 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 00:21:32.536597 kubelet[1553]: I0906 00:21:32.536578 1553 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:21:32.543636 systemd[1]: Created slice kubepods.slice.
Sep 6 00:21:32.553887 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:21:32.558037 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 00:21:32.560763 kubelet[1553]: I0906 00:21:32.560650 1553 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:21:32.562762 kubelet[1553]: I0906 00:21:32.562721 1553 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:21:32.563008 kubelet[1553]: I0906 00:21:32.562986 1553 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 00:21:32.563170 kubelet[1553]: I0906 00:21:32.563149 1553 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
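The three "Created slice" entries above are the kubelet's pod cgroup hierarchy under the systemd driver (the log's nodeConfig shows `"CgroupDriver":"systemd"`): Guaranteed pods sit directly under kubepods.slice, while Burstable and BestEffort pods get their own child slices. A sketch of that mapping, using the slice names from the log; the function itself is illustrative, not kubelet code:

```python
# Map a pod's QoS class to the parent slice the kubelet created above.
def qos_parent_slice(qos_class: str) -> str:
    return {
        "Guaranteed": "kubepods.slice",            # no dedicated child slice
        "Burstable": "kubepods-burstable.slice",
        "BestEffort": "kubepods-besteffort.slice",
    }[qos_class]

assert qos_parent_slice("Burstable") == "kubepods-burstable.slice"
```

Later entries in this log (e.g. `kubepods-burstable-pod3c4bbf1a23e6769e651612552600ed6b.slice`) show per-pod slices being created under these parents, with the pod UID embedded in the slice name.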
Sep 6 00:21:32.563281 kubelet[1553]: I0906 00:21:32.563265 1553 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 6 00:21:32.563459 kubelet[1553]: E0906 00:21:32.563430 1553 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:21:32.566012 kubelet[1553]: E0906 00:21:32.565978 1553 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 6 00:21:32.569522 kubelet[1553]: I0906 00:21:32.569470 1553 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:21:32.569522 kubelet[1553]: I0906 00:21:32.569499 1553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:21:32.569749 kubelet[1553]: E0906 00:21:32.568946 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.64.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 6 00:21:32.569888 kubelet[1553]: I0906 00:21:32.569869 1553 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:21:32.574556 kubelet[1553]: E0906 00:21:32.574517 1553 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 00:21:32.574890 kubelet[1553]: E0906 00:21:32.574863 1553 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-f7f83b6e50\" not found"
Sep 6 00:21:32.671665 kubelet[1553]: I0906 00:21:32.671539 1553 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.674452 kubelet[1553]: E0906 00:21:32.674406 1553 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.64.97:6443/api/v1/nodes\": dial tcp 143.198.64.97:6443: connect: connection refused" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.679505 systemd[1]: Created slice kubepods-burstable-pod3c4bbf1a23e6769e651612552600ed6b.slice.
Sep 6 00:21:32.690623 kubelet[1553]: E0906 00:21:32.690586 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.693814 systemd[1]: Created slice kubepods-burstable-podf2ed86433b300279dbc89abdaf673726.slice.
Sep 6 00:21:32.698094 kubelet[1553]: E0906 00:21:32.698055 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.699761 systemd[1]: Created slice kubepods-burstable-podc231cea0af17c399b9a1601cbc2f038f.slice.
Sep 6 00:21:32.701872 kubelet[1553]: E0906 00:21:32.701824 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.714379 kubelet[1553]: E0906 00:21:32.714328 1553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.64.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f7f83b6e50?timeout=10s\": dial tcp 143.198.64.97:6443: connect: connection refused" interval="400ms"
Sep 6 00:21:32.812939 kubelet[1553]: I0906 00:21:32.812865 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.813691 kubelet[1553]: I0906 00:21:32.813662 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.813836 kubelet[1553]: I0906 00:21:32.813815 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814053 kubelet[1553]: I0906 00:21:32.813989 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814116 kubelet[1553]: I0906 00:21:32.814076 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814161 kubelet[1553]: I0906 00:21:32.814132 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814191 kubelet[1553]: I0906 00:21:32.814161 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814227 kubelet[1553]: I0906 00:21:32.814189 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c231cea0af17c399b9a1601cbc2f038f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-f7f83b6e50\" (UID: \"c231cea0af17c399b9a1601cbc2f038f\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.814227 kubelet[1553]: I0906 00:21:32.814214 1553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.877462 kubelet[1553]: I0906 00:21:32.877408 1553 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.877902 kubelet[1553]: E0906 00:21:32.877858 1553 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.64.97:6443/api/v1/nodes\": dial tcp 143.198.64.97:6443: connect: connection refused" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:32.991690 kubelet[1553]: E0906 00:21:32.991535 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:32.993582 env[1191]: time="2025-09-06T00:21:32.993500107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-f7f83b6e50,Uid:3c4bbf1a23e6769e651612552600ed6b,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:33.000773 kubelet[1553]: E0906 00:21:33.000718 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:33.001824 env[1191]: time="2025-09-06T00:21:33.001778681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-f7f83b6e50,Uid:f2ed86433b300279dbc89abdaf673726,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:33.004054 kubelet[1553]: E0906 00:21:33.004005 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:33.004801 env[1191]: time="2025-09-06T00:21:33.004760989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-f7f83b6e50,Uid:c231cea0af17c399b9a1601cbc2f038f,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:33.115042 kubelet[1553]: E0906 00:21:33.114940 1553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.64.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f7f83b6e50?timeout=10s\": dial tcp 143.198.64.97:6443: connect: connection refused" interval="800ms"
Sep 6 00:21:33.279688 kubelet[1553]: I0906 00:21:33.279179 1553 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:33.280119 kubelet[1553]: E0906 00:21:33.280033 1553 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.64.97:6443/api/v1/nodes\": dial tcp 143.198.64.97:6443: connect: connection refused" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:33.316664 kubelet[1553]: E0906 00:21:33.316600 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.64.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 00:21:33.508836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194290980.mount: Deactivated successfully.
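The lease controller's retry interval is visible doubling across this log: interval="200ms", then "400ms", then "800ms" above, and later "1.6s". A sketch of that doubling backoff, reproducing exactly the sequence the log shows (the retry count and any cap beyond 1.6s are not stated in the log, so only the logged steps are asserted):

```python
# Generate the doubling retry intervals observed in the
# "Failed to ensure lease exists, will retry" entries.
def backoff_intervals(base_ms: int = 200, factor: int = 2, retries: int = 4):
    """Yield retry intervals in milliseconds: base, base*2, base*4, ..."""
    interval = base_ms
    for _ in range(retries):
        yield interval
        interval *= factor

# 200ms -> 400ms -> 800ms -> 1600ms (logged as "1.6s")
assert list(backoff_intervals()) == [200, 400, 800, 1600]
```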
Sep 6 00:21:33.512768 env[1191]: time="2025-09-06T00:21:33.512715122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.513843 env[1191]: time="2025-09-06T00:21:33.513802599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.516374 env[1191]: time="2025-09-06T00:21:33.516330341Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.517260 env[1191]: time="2025-09-06T00:21:33.517215561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.518879 env[1191]: time="2025-09-06T00:21:33.518844834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.527329 env[1191]: time="2025-09-06T00:21:33.527283148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.532378 env[1191]: time="2025-09-06T00:21:33.531319602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.532378 env[1191]: time="2025-09-06T00:21:33.531974395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.532910 env[1191]: time="2025-09-06T00:21:33.532850863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.533811 env[1191]: time="2025-09-06T00:21:33.533773224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.534582 env[1191]: time="2025-09-06T00:21:33.534547415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.535238 env[1191]: time="2025-09-06T00:21:33.535209350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:33.563758 env[1191]: time="2025-09-06T00:21:33.563657011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:33.563977 env[1191]: time="2025-09-06T00:21:33.563719177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:33.563977 env[1191]: time="2025-09-06T00:21:33.563730659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:33.564132 env[1191]: time="2025-09-06T00:21:33.564043402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bce886030690266119cad7b0a74fb8e55596a5fe078306e77c61f4346e16eb6e pid=1604 runtime=io.containerd.runc.v2
Sep 6 00:21:33.571465 env[1191]: time="2025-09-06T00:21:33.571226628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:33.571465 env[1191]: time="2025-09-06T00:21:33.571266559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:33.571465 env[1191]: time="2025-09-06T00:21:33.571278880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:33.572700 env[1191]: time="2025-09-06T00:21:33.571827627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2028e6109f0fd3bc46f104ac3fb1dd49fcbd17565c2e9fbc82b69b23d1d2750 pid=1605 runtime=io.containerd.runc.v2
Sep 6 00:21:33.578837 env[1191]: time="2025-09-06T00:21:33.578720677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:33.579091 env[1191]: time="2025-09-06T00:21:33.579048633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:33.579271 env[1191]: time="2025-09-06T00:21:33.579225659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:33.579598 env[1191]: time="2025-09-06T00:21:33.579556638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/613be197ea67b371c021ce0b7670424ac95f27fb3d7ab4de857cdcaefc4be7d1 pid=1627 runtime=io.containerd.runc.v2
Sep 6 00:21:33.601891 systemd[1]: Started cri-containerd-bce886030690266119cad7b0a74fb8e55596a5fe078306e77c61f4346e16eb6e.scope.
Sep 6 00:21:33.627558 systemd[1]: Started cri-containerd-a2028e6109f0fd3bc46f104ac3fb1dd49fcbd17565c2e9fbc82b69b23d1d2750.scope.
Sep 6 00:21:33.652691 systemd[1]: Started cri-containerd-613be197ea67b371c021ce0b7670424ac95f27fb3d7ab4de857cdcaefc4be7d1.scope.
Sep 6 00:21:33.699464 env[1191]: time="2025-09-06T00:21:33.699418002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-f7f83b6e50,Uid:f2ed86433b300279dbc89abdaf673726,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2028e6109f0fd3bc46f104ac3fb1dd49fcbd17565c2e9fbc82b69b23d1d2750\""
Sep 6 00:21:33.703806 kubelet[1553]: E0906 00:21:33.703517 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:33.709834 env[1191]: time="2025-09-06T00:21:33.707064567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-f7f83b6e50,Uid:c231cea0af17c399b9a1601cbc2f038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bce886030690266119cad7b0a74fb8e55596a5fe078306e77c61f4346e16eb6e\""
Sep 6 00:21:33.709834 env[1191]: time="2025-09-06T00:21:33.708938300Z" level=info msg="CreateContainer within sandbox \"a2028e6109f0fd3bc46f104ac3fb1dd49fcbd17565c2e9fbc82b69b23d1d2750\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 6 00:21:33.710066 kubelet[1553]: E0906 00:21:33.707852 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:33.711934 env[1191]: time="2025-09-06T00:21:33.711888576Z" level=info msg="CreateContainer within sandbox \"bce886030690266119cad7b0a74fb8e55596a5fe078306e77c61f4346e16eb6e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 6 00:21:33.722013 env[1191]: time="2025-09-06T00:21:33.721955062Z" level=info msg="CreateContainer within sandbox \"a2028e6109f0fd3bc46f104ac3fb1dd49fcbd17565c2e9fbc82b69b23d1d2750\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4d207f0293dd4cb9bd949f9ae92b36d29e748f61528dcc127dbdbf3f7ddac13\""
Sep 6 00:21:33.722958 env[1191]: time="2025-09-06T00:21:33.722915604Z" level=info msg="StartContainer for \"a4d207f0293dd4cb9bd949f9ae92b36d29e748f61528dcc127dbdbf3f7ddac13\""
Sep 6 00:21:33.739605 env[1191]: time="2025-09-06T00:21:33.739537876Z" level=info msg="CreateContainer within sandbox \"bce886030690266119cad7b0a74fb8e55596a5fe078306e77c61f4346e16eb6e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bdc6dee52d7ed9520119a2b9111c286f0cd28e8e7eb66f37eb7f36204f7c452b\""
Sep 6 00:21:33.740366 env[1191]: time="2025-09-06T00:21:33.740331958Z" level=info msg="StartContainer for \"bdc6dee52d7ed9520119a2b9111c286f0cd28e8e7eb66f37eb7f36204f7c452b\""
Sep 6 00:21:33.758768 env[1191]: time="2025-09-06T00:21:33.758720646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-f7f83b6e50,Uid:3c4bbf1a23e6769e651612552600ed6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"613be197ea67b371c021ce0b7670424ac95f27fb3d7ab4de857cdcaefc4be7d1\""
Sep 6 00:21:33.760172 kubelet[1553]: E0906 00:21:33.760068 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:33.763731 env[1191]: time="2025-09-06T00:21:33.763690631Z" level=info msg="CreateContainer within sandbox \"613be197ea67b371c021ce0b7670424ac95f27fb3d7ab4de857cdcaefc4be7d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 6 00:21:33.773483 systemd[1]: Started cri-containerd-a4d207f0293dd4cb9bd949f9ae92b36d29e748f61528dcc127dbdbf3f7ddac13.scope.
Sep 6 00:21:33.778801 kubelet[1553]: E0906 00:21:33.778758 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.64.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 00:21:33.780975 env[1191]: time="2025-09-06T00:21:33.780913586Z" level=info msg="CreateContainer within sandbox \"613be197ea67b371c021ce0b7670424ac95f27fb3d7ab4de857cdcaefc4be7d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cdf972cbccec1f225758278838cfd72f46334e725015d0e2aed3d0278fbd977d\""
Sep 6 00:21:33.782081 env[1191]: time="2025-09-06T00:21:33.782045253Z" level=info msg="StartContainer for \"cdf972cbccec1f225758278838cfd72f46334e725015d0e2aed3d0278fbd977d\""
Sep 6 00:21:33.790434 systemd[1]: Started cri-containerd-bdc6dee52d7ed9520119a2b9111c286f0cd28e8e7eb66f37eb7f36204f7c452b.scope.
Sep 6 00:21:33.819658 systemd[1]: Started cri-containerd-cdf972cbccec1f225758278838cfd72f46334e725015d0e2aed3d0278fbd977d.scope.
Sep 6 00:21:33.856077 env[1191]: time="2025-09-06T00:21:33.856025463Z" level=info msg="StartContainer for \"a4d207f0293dd4cb9bd949f9ae92b36d29e748f61528dcc127dbdbf3f7ddac13\" returns successfully"
Sep 6 00:21:33.905500 env[1191]: time="2025-09-06T00:21:33.905449638Z" level=info msg="StartContainer for \"bdc6dee52d7ed9520119a2b9111c286f0cd28e8e7eb66f37eb7f36204f7c452b\" returns successfully"
Sep 6 00:21:33.916374 kubelet[1553]: E0906 00:21:33.916321 1553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.64.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f7f83b6e50?timeout=10s\": dial tcp 143.198.64.97:6443: connect: connection refused" interval="1.6s"
Sep 6 00:21:33.923028 env[1191]: time="2025-09-06T00:21:33.922978394Z" level=info msg="StartContainer for \"cdf972cbccec1f225758278838cfd72f46334e725015d0e2aed3d0278fbd977d\" returns successfully"
Sep 6 00:21:33.984914 kubelet[1553]: E0906 00:21:33.984865 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.64.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 6 00:21:34.003638 kubelet[1553]: E0906 00:21:34.003591 1553 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.64.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f7f83b6e50&limit=500&resourceVersion=0\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 00:21:34.081523 kubelet[1553]: I0906 00:21:34.081403 1553 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:34.082007 kubelet[1553]: E0906 00:21:34.081735 1553 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.64.97:6443/api/v1/nodes\": dial tcp 143.198.64.97:6443: connect: connection refused" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:34.551954 kubelet[1553]: E0906 00:21:34.551904 1553 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.64.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.64.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 6 00:21:34.572393 kubelet[1553]: E0906 00:21:34.572354 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:34.572573 kubelet[1553]: E0906 00:21:34.572485 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:34.574680 kubelet[1553]: E0906 00:21:34.574630 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:34.574847 kubelet[1553]: E0906 00:21:34.574827 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:34.576592 kubelet[1553]: E0906 00:21:34.576559 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:34.576749 kubelet[1553]: E0906 00:21:34.576720 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:35.578721 kubelet[1553]: E0906 00:21:35.578679 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:35.579254 kubelet[1553]: E0906 00:21:35.579209 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:35.588459 kubelet[1553]: E0906 00:21:35.588402 1553 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:35.588678 kubelet[1553]: E0906 00:21:35.588631 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:35.683376 kubelet[1553]: I0906 00:21:35.683339 1553 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:36.980333 kubelet[1553]: E0906 00:21:36.980297 1553 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-f7f83b6e50\" not found" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.013478 kubelet[1553]: I0906 00:21:37.013431 1553 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.013478 kubelet[1553]: E0906 00:21:37.013477 1553 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-f7f83b6e50\": node \"ci-3510.3.8-n-f7f83b6e50\" not found"
Sep 6 00:21:37.036038 kubelet[1553]: I0906 00:21:37.035994 1553 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.083325 kubelet[1553]: E0906 00:21:37.083286 1553 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.083720 kubelet[1553]: E0906 00:21:37.083698 1553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:37.113403 kubelet[1553]: I0906 00:21:37.113345 1553 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.117384 kubelet[1553]: E0906 00:21:37.117326 1553 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.117740 kubelet[1553]: I0906 00:21:37.117706 1553 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.120729 kubelet[1553]: E0906 00:21:37.120681 1553 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-f7f83b6e50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.121026 kubelet[1553]: I0906 00:21:37.121002 1553 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.123981 kubelet[1553]: E0906 00:21:37.123938 1553 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50"
Sep 6 00:21:37.478786 kubelet[1553]: I0906 00:21:37.478730 1553 apiserver.go:52] "Watching apiserver"
Sep 6 00:21:37.513416 kubelet[1553]: I0906 00:21:37.513317 1553 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 00:21:39.670512 systemd[1]: Reloading.
Sep 6 00:21:39.794279 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-09-06T00:21:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:21:39.794917 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-09-06T00:21:39Z" level=info msg="torcx already run"
Sep 6 00:21:39.905446 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:21:39.905469 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:21:39.933725 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:21:40.074449 systemd[1]: Stopping kubelet.service...
Sep 6 00:21:40.092916 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:21:40.093507 systemd[1]: Stopped kubelet.service.
Sep 6 00:21:40.093723 systemd[1]: kubelet.service: Consumed 1.236s CPU time.
Sep 6 00:21:40.097309 systemd[1]: Starting kubelet.service...
Sep 6 00:21:41.191687 systemd[1]: Started kubelet.service.
Sep 6 00:21:41.299969 kubelet[1902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:41.300589 kubelet[1902]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:21:41.300685 kubelet[1902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:41.301634 kubelet[1902]: I0906 00:21:41.301559 1902 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:41.323412 kubelet[1902]: I0906 00:21:41.323337 1902 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:21:41.323412 kubelet[1902]: I0906 00:21:41.323395 1902 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:41.323946 kubelet[1902]: I0906 00:21:41.323895 1902 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:21:41.328288 kubelet[1902]: I0906 00:21:41.326627 1902 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 00:21:41.334567 kubelet[1902]: I0906 00:21:41.334487 1902 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:41.341090 kubelet[1902]: E0906 00:21:41.341042 1902 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:41.341090 kubelet[1902]: I0906 00:21:41.341082 1902 
server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:21:41.344227 sudo[1917]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:21:41.344899 sudo[1917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:21:41.347456 kubelet[1902]: I0906 00:21:41.347425 1902 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:21:41.347732 kubelet[1902]: I0906 00:21:41.347698 1902 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:41.347901 kubelet[1902]: I0906 00:21:41.347730 1902 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-f7f83b6e50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":
null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:21:41.348023 kubelet[1902]: I0906 00:21:41.347910 1902 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:21:41.348023 kubelet[1902]: I0906 00:21:41.347922 1902 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:21:41.348023 kubelet[1902]: I0906 00:21:41.347973 1902 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:41.348169 kubelet[1902]: I0906 00:21:41.348153 1902 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:21:41.348233 kubelet[1902]: I0906 00:21:41.348178 1902 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:21:41.350495 kubelet[1902]: I0906 00:21:41.349577 1902 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:21:41.351170 kubelet[1902]: I0906 00:21:41.351143 1902 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:21:41.365520 kubelet[1902]: I0906 00:21:41.365481 1902 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:21:41.366352 kubelet[1902]: I0906 00:21:41.366320 1902 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:21:41.387919 kubelet[1902]: I0906 00:21:41.387879 1902 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:21:41.388399 kubelet[1902]: I0906 00:21:41.388304 1902 server.go:1289] "Started kubelet" Sep 6 00:21:41.400675 kubelet[1902]: I0906 
00:21:41.400645 1902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:21:41.404894 kubelet[1902]: E0906 00:21:41.404858 1902 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:21:41.410475 kubelet[1902]: I0906 00:21:41.410444 1902 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:21:41.414741 kubelet[1902]: I0906 00:21:41.414684 1902 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:21:41.419210 kubelet[1902]: I0906 00:21:41.419179 1902 server.go:317] "Adding debug handlers to kubelet server" Sep 6 00:21:41.423792 kubelet[1902]: I0906 00:21:41.423726 1902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:21:41.424397 kubelet[1902]: I0906 00:21:41.424199 1902 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:21:41.425501 kubelet[1902]: I0906 00:21:41.425479 1902 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:21:41.431296 kubelet[1902]: I0906 00:21:41.431254 1902 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:21:41.432896 kubelet[1902]: I0906 00:21:41.432835 1902 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:21:41.433920 kubelet[1902]: I0906 00:21:41.431513 1902 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:21:41.434186 kubelet[1902]: I0906 00:21:41.431647 1902 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:21:41.435927 kubelet[1902]: I0906 00:21:41.435901 1902 factory.go:223] Registration of the containerd 
container factory successfully Sep 6 00:21:41.454021 kubelet[1902]: I0906 00:21:41.448264 1902 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:21:41.454021 kubelet[1902]: I0906 00:21:41.449739 1902 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 6 00:21:41.454021 kubelet[1902]: I0906 00:21:41.449768 1902 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:21:41.454021 kubelet[1902]: I0906 00:21:41.449795 1902 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:21:41.454021 kubelet[1902]: I0906 00:21:41.449803 1902 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:21:41.454021 kubelet[1902]: E0906 00:21:41.449856 1902 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:21:41.537094 kubelet[1902]: I0906 00:21:41.537051 1902 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:21:41.537094 kubelet[1902]: I0906 00:21:41.537081 1902 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537140 1902 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537415 1902 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537438 1902 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537457 1902 policy_none.go:49] "None policy: Start" Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537469 1902 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:21:41.537490 kubelet[1902]: I0906 00:21:41.537480 1902 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:21:41.537812 kubelet[1902]: I0906 00:21:41.537586 1902 
state_mem.go:75] "Updated machine memory state" Sep 6 00:21:41.541987 kubelet[1902]: E0906 00:21:41.541940 1902 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:21:41.542438 kubelet[1902]: I0906 00:21:41.542408 1902 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:21:41.542575 kubelet[1902]: I0906 00:21:41.542436 1902 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:21:41.542857 kubelet[1902]: I0906 00:21:41.542839 1902 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:21:41.544797 kubelet[1902]: E0906 00:21:41.544760 1902 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:21:41.559357 kubelet[1902]: I0906 00:21:41.559323 1902 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.561877 kubelet[1902]: I0906 00:21:41.561826 1902 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.565704 kubelet[1902]: I0906 00:21:41.565669 1902 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.572723 kubelet[1902]: I0906 00:21:41.572688 1902 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 00:21:41.573032 kubelet[1902]: I0906 00:21:41.573004 1902 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 00:21:41.573469 kubelet[1902]: I0906 00:21:41.573440 1902 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 00:21:41.635241 kubelet[1902]: I0906 00:21:41.635191 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635241 kubelet[1902]: I0906 00:21:41.635240 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635445 kubelet[1902]: I0906 00:21:41.635272 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635445 kubelet[1902]: I0906 00:21:41.635294 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635445 kubelet[1902]: I0906 00:21:41.635313 1902 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635445 kubelet[1902]: I0906 00:21:41.635328 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635445 kubelet[1902]: I0906 00:21:41.635344 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c231cea0af17c399b9a1601cbc2f038f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-f7f83b6e50\" (UID: \"c231cea0af17c399b9a1601cbc2f038f\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635587 kubelet[1902]: I0906 00:21:41.635359 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c4bbf1a23e6769e651612552600ed6b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" (UID: \"3c4bbf1a23e6769e651612552600ed6b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.635587 kubelet[1902]: I0906 00:21:41.635376 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2ed86433b300279dbc89abdaf673726-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f7f83b6e50\" (UID: \"f2ed86433b300279dbc89abdaf673726\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.658139 kubelet[1902]: I0906 00:21:41.658080 1902 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.671257 kubelet[1902]: I0906 00:21:41.671217 1902 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.671413 kubelet[1902]: I0906 00:21:41.671307 1902 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:41.874213 kubelet[1902]: E0906 00:21:41.874159 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.874630 kubelet[1902]: E0906 00:21:41.874595 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:41.876511 kubelet[1902]: E0906 00:21:41.874899 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.133509 sudo[1917]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:42.354455 kubelet[1902]: I0906 00:21:42.354391 1902 apiserver.go:52] "Watching apiserver" Sep 6 00:21:42.435249 kubelet[1902]: I0906 00:21:42.435089 1902 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:21:42.507569 kubelet[1902]: E0906 00:21:42.507529 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.508634 kubelet[1902]: E0906 00:21:42.508605 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.509002 kubelet[1902]: I0906 00:21:42.508985 1902 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:42.534287 kubelet[1902]: I0906 00:21:42.534235 1902 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 00:21:42.534471 kubelet[1902]: E0906 00:21:42.534311 1902 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-f7f83b6e50\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" Sep 6 00:21:42.534561 kubelet[1902]: E0906 00:21:42.534543 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:42.557847 kubelet[1902]: I0906 00:21:42.557771 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-f7f83b6e50" podStartSLOduration=1.557752362 podStartE2EDuration="1.557752362s" podCreationTimestamp="2025-09-06 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.541710387 +0000 UTC m=+1.333959623" watchObservedRunningTime="2025-09-06 00:21:42.557752362 +0000 UTC m=+1.350001613" Sep 6 00:21:42.571881 kubelet[1902]: I0906 00:21:42.571812 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f7f83b6e50" podStartSLOduration=1.5717087379999999 podStartE2EDuration="1.571708738s" podCreationTimestamp="2025-09-06 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.571641348 +0000 UTC m=+1.363890593" watchObservedRunningTime="2025-09-06 00:21:42.571708738 +0000 UTC m=+1.363957961" Sep 6 00:21:42.572124 kubelet[1902]: I0906 00:21:42.571994 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f7f83b6e50" podStartSLOduration=1.5719850210000001 podStartE2EDuration="1.571985021s" podCreationTimestamp="2025-09-06 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:42.558896703 +0000 UTC m=+1.351145948" watchObservedRunningTime="2025-09-06 00:21:42.571985021 +0000 UTC m=+1.364234269" Sep 6 00:21:43.510189 kubelet[1902]: E0906 00:21:43.510147 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:43.511390 kubelet[1902]: E0906 00:21:43.511360 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:43.983233 sudo[1294]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:43.989421 sshd[1290]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:43.993624 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:21:43.994916 systemd[1]: sshd@4-143.198.64.97:22-147.75.109.163:49494.service: Deactivated successfully. Sep 6 00:21:43.995982 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:21:43.996544 systemd[1]: session-5.scope: Consumed 7.512s CPU time. Sep 6 00:21:43.997213 systemd-logind[1179]: Removed session 5. 
Sep 6 00:21:44.615017 kubelet[1902]: E0906 00:21:44.614974 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:45.494403 kubelet[1902]: E0906 00:21:45.494350 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:45.512455 kubelet[1902]: E0906 00:21:45.512410 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:45.512689 kubelet[1902]: E0906 00:21:45.512671 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:21:45.782512 kubelet[1902]: I0906 00:21:45.782373 1902 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:21:45.783467 env[1191]: time="2025-09-06T00:21:45.783415489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:21:45.784687 kubelet[1902]: I0906 00:21:45.784471 1902 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:21:46.877287 systemd[1]: Created slice kubepods-besteffort-poddb7e6be6_5a04_4e84_af43_29cf84b36b2a.slice. 
Sep 6 00:21:46.893445 kubelet[1902]: E0906 00:21:46.893395 1902 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Sep 6 00:21:46.893954 kubelet[1902]: E0906 00:21:46.893893 1902 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Sep 6 00:21:46.894002 kubelet[1902]: E0906 00:21:46.893959 1902 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Sep 6 00:21:46.897581 systemd[1]: Created slice kubepods-burstable-poddbaaf578_ccb9_45a0_9904_24b93546cfa1.slice. 
Sep 6 00:21:46.968841 kubelet[1902]: I0906 00:21:46.968780 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db7e6be6-5a04-4e84-af43-29cf84b36b2a-kube-proxy\") pod \"kube-proxy-z7gq7\" (UID: \"db7e6be6-5a04-4e84-af43-29cf84b36b2a\") " pod="kube-system/kube-proxy-z7gq7"
Sep 6 00:21:46.968841 kubelet[1902]: I0906 00:21:46.968842 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db7e6be6-5a04-4e84-af43-29cf84b36b2a-xtables-lock\") pod \"kube-proxy-z7gq7\" (UID: \"db7e6be6-5a04-4e84-af43-29cf84b36b2a\") " pod="kube-system/kube-proxy-z7gq7"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968872 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-net\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968898 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968933 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hostproc\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968954 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cni-path\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968975 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-etc-cni-netd\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969172 kubelet[1902]: I0906 00:21:46.968995 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbaaf578-ccb9-45a0-9904-24b93546cfa1-clustermesh-secrets\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969017 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-config-path\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969042 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-run\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969071 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-cgroup\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969091 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-xtables-lock\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969143 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-kernel\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969418 kubelet[1902]: I0906 00:21:46.969169 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db7e6be6-5a04-4e84-af43-29cf84b36b2a-lib-modules\") pod \"kube-proxy-z7gq7\" (UID: \"db7e6be6-5a04-4e84-af43-29cf84b36b2a\") " pod="kube-system/kube-proxy-z7gq7"
Sep 6 00:21:46.969643 kubelet[1902]: I0906 00:21:46.969194 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92664\" (UniqueName: \"kubernetes.io/projected/db7e6be6-5a04-4e84-af43-29cf84b36b2a-kube-api-access-92664\") pod \"kube-proxy-z7gq7\" (UID: \"db7e6be6-5a04-4e84-af43-29cf84b36b2a\") " pod="kube-system/kube-proxy-z7gq7"
Sep 6 00:21:46.969643 kubelet[1902]: I0906 00:21:46.969219 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-bpf-maps\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969643 kubelet[1902]: I0906 00:21:46.969245 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-lib-modules\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.969643 kubelet[1902]: I0906 00:21:46.969271 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478g8\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-kube-api-access-478g8\") pod \"cilium-sjkwz\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") " pod="kube-system/cilium-sjkwz"
Sep 6 00:21:46.996438 kubelet[1902]: I0906 00:21:46.996361 1902 status_manager.go:895] "Failed to get status for pod" podUID="e0ff582c-4f11-4d6f-9a62-8296af15feec" pod="kube-system/cilium-operator-6c4d7847fc-vgg85" err="pods \"cilium-operator-6c4d7847fc-vgg85\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object"
Sep 6 00:21:46.997611 systemd[1]: Created slice kubepods-besteffort-pode0ff582c_4f11_4d6f_9a62_8296af15feec.slice.
Sep 6 00:21:47.069853 kubelet[1902]: I0906 00:21:47.069795 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0ff582c-4f11-4d6f-9a62-8296af15feec-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vgg85\" (UID: \"e0ff582c-4f11-4d6f-9a62-8296af15feec\") " pod="kube-system/cilium-operator-6c4d7847fc-vgg85"
Sep 6 00:21:47.070040 kubelet[1902]: I0906 00:21:47.069916 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlr9z\" (UniqueName: \"kubernetes.io/projected/e0ff582c-4f11-4d6f-9a62-8296af15feec-kube-api-access-qlr9z\") pod \"cilium-operator-6c4d7847fc-vgg85\" (UID: \"e0ff582c-4f11-4d6f-9a62-8296af15feec\") " pod="kube-system/cilium-operator-6c4d7847fc-vgg85"
Sep 6 00:21:47.080318 kubelet[1902]: I0906 00:21:47.080269 1902 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 00:21:47.184537 kubelet[1902]: E0906 00:21:47.184393 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:47.187072 env[1191]: time="2025-09-06T00:21:47.186618972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7gq7,Uid:db7e6be6-5a04-4e84-af43-29cf84b36b2a,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:47.207654 env[1191]: time="2025-09-06T00:21:47.207535554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:47.207654 env[1191]: time="2025-09-06T00:21:47.207607159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:47.207654 env[1191]: time="2025-09-06T00:21:47.207618699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:47.208382 env[1191]: time="2025-09-06T00:21:47.208299115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5618923e235cd2ae2e9106543d09e1c3bad8a5fcacd7a3a31d0328d7ecdc015e pid=1986 runtime=io.containerd.runc.v2
Sep 6 00:21:47.229300 systemd[1]: Started cri-containerd-5618923e235cd2ae2e9106543d09e1c3bad8a5fcacd7a3a31d0328d7ecdc015e.scope.
Sep 6 00:21:47.262017 env[1191]: time="2025-09-06T00:21:47.261969578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7gq7,Uid:db7e6be6-5a04-4e84-af43-29cf84b36b2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5618923e235cd2ae2e9106543d09e1c3bad8a5fcacd7a3a31d0328d7ecdc015e\""
Sep 6 00:21:47.263133 kubelet[1902]: E0906 00:21:47.263002 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:47.268511 env[1191]: time="2025-09-06T00:21:47.268161931Z" level=info msg="CreateContainer within sandbox \"5618923e235cd2ae2e9106543d09e1c3bad8a5fcacd7a3a31d0328d7ecdc015e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 00:21:47.281135 env[1191]: time="2025-09-06T00:21:47.281066367Z" level=info msg="CreateContainer within sandbox \"5618923e235cd2ae2e9106543d09e1c3bad8a5fcacd7a3a31d0328d7ecdc015e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d94ff6f0625bad66ccbb46915fe293a1c4ece3276b6b213fbb681e68c2640ea0\""
Sep 6 00:21:47.283477 env[1191]: time="2025-09-06T00:21:47.283431033Z" level=info msg="StartContainer for \"d94ff6f0625bad66ccbb46915fe293a1c4ece3276b6b213fbb681e68c2640ea0\""
Sep 6 00:21:47.305766 systemd[1]: Started cri-containerd-d94ff6f0625bad66ccbb46915fe293a1c4ece3276b6b213fbb681e68c2640ea0.scope.
Sep 6 00:21:47.351266 env[1191]: time="2025-09-06T00:21:47.351168621Z" level=info msg="StartContainer for \"d94ff6f0625bad66ccbb46915fe293a1c4ece3276b6b213fbb681e68c2640ea0\" returns successfully"
Sep 6 00:21:47.518162 kubelet[1902]: E0906 00:21:47.517997 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:47.556792 kubelet[1902]: I0906 00:21:47.556712 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z7gq7" podStartSLOduration=1.556682177 podStartE2EDuration="1.556682177s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:47.537133361 +0000 UTC m=+6.329382610" watchObservedRunningTime="2025-09-06 00:21:47.556682177 +0000 UTC m=+6.348931420"
Sep 6 00:21:47.850182 kubelet[1902]: E0906 00:21:47.850124 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:47.901398 kubelet[1902]: E0906 00:21:47.901329 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:47.902293 env[1191]: time="2025-09-06T00:21:47.902239836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vgg85,Uid:e0ff582c-4f11-4d6f-9a62-8296af15feec,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:47.919479 env[1191]: time="2025-09-06T00:21:47.919352104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:47.919479 env[1191]: time="2025-09-06T00:21:47.919482446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:47.919696 env[1191]: time="2025-09-06T00:21:47.919522371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:47.919772 env[1191]: time="2025-09-06T00:21:47.919733620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95 pid=2134 runtime=io.containerd.runc.v2
Sep 6 00:21:47.944494 systemd[1]: Started cri-containerd-c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95.scope.
Sep 6 00:21:48.016313 env[1191]: time="2025-09-06T00:21:48.016250453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vgg85,Uid:e0ff582c-4f11-4d6f-9a62-8296af15feec,Namespace:kube-system,Attempt:0,} returns sandbox id \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\""
Sep 6 00:21:48.018065 kubelet[1902]: E0906 00:21:48.017749 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:48.019506 env[1191]: time="2025-09-06T00:21:48.019444347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:21:48.074767 kubelet[1902]: E0906 00:21:48.074378 1902 projected.go:264] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 6 00:21:48.074767 kubelet[1902]: E0906 00:21:48.074415 1902 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-sjkwz: failed to sync secret cache: timed out waiting for the condition
Sep 6 00:21:48.074767 kubelet[1902]: E0906 00:21:48.074508 1902 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls podName:dbaaf578-ccb9-45a0-9904-24b93546cfa1 nodeName:}" failed. No retries permitted until 2025-09-06 00:21:48.574486748 +0000 UTC m=+7.366735985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls") pod "cilium-sjkwz" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1") : failed to sync secret cache: timed out waiting for the condition
Sep 6 00:21:48.521651 kubelet[1902]: E0906 00:21:48.521620 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:48.701903 kubelet[1902]: E0906 00:21:48.701835 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:48.703389 env[1191]: time="2025-09-06T00:21:48.702924246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjkwz,Uid:dbaaf578-ccb9-45a0-9904-24b93546cfa1,Namespace:kube-system,Attempt:0,}"
Sep 6 00:21:48.724199 env[1191]: time="2025-09-06T00:21:48.724059917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:21:48.724199 env[1191]: time="2025-09-06T00:21:48.724150341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:21:48.724199 env[1191]: time="2025-09-06T00:21:48.724163084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:21:48.724954 env[1191]: time="2025-09-06T00:21:48.724849390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f pid=2235 runtime=io.containerd.runc.v2
Sep 6 00:21:48.746522 systemd[1]: Started cri-containerd-086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f.scope.
Sep 6 00:21:48.790857 env[1191]: time="2025-09-06T00:21:48.790736639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sjkwz,Uid:dbaaf578-ccb9-45a0-9904-24b93546cfa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\""
Sep 6 00:21:48.791998 kubelet[1902]: E0906 00:21:48.791963 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:49.577146 kubelet[1902]: E0906 00:21:49.576515 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:50.235262 env[1191]: time="2025-09-06T00:21:50.235196224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:50.237618 env[1191]: time="2025-09-06T00:21:50.237569916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:50.239040 env[1191]: time="2025-09-06T00:21:50.238999731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:50.239754 env[1191]: time="2025-09-06T00:21:50.239719329Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 6 00:21:50.243958 env[1191]: time="2025-09-06T00:21:50.243903625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 00:21:50.248830 env[1191]: time="2025-09-06T00:21:50.248756297Z" level=info msg="CreateContainer within sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:21:50.267852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891251901.mount: Deactivated successfully.
Sep 6 00:21:50.272221 env[1191]: time="2025-09-06T00:21:50.272154855Z" level=info msg="CreateContainer within sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\""
Sep 6 00:21:50.274343 env[1191]: time="2025-09-06T00:21:50.274288898Z" level=info msg="StartContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\""
Sep 6 00:21:50.295965 systemd[1]: Started cri-containerd-6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9.scope.
Sep 6 00:21:50.335594 env[1191]: time="2025-09-06T00:21:50.335520614Z" level=info msg="StartContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" returns successfully"
Sep 6 00:21:50.583555 kubelet[1902]: E0906 00:21:50.582347 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:51.584532 kubelet[1902]: E0906 00:21:51.583959 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:21:53.317930 update_engine[1182]: I0906 00:21:53.317353 1182 update_attempter.cc:509] Updating boot flags...
Sep 6 00:21:56.331663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94617466.mount: Deactivated successfully.
Sep 6 00:21:59.728856 env[1191]: time="2025-09-06T00:21:59.728792137Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:59.731003 env[1191]: time="2025-09-06T00:21:59.730960009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:59.733306 env[1191]: time="2025-09-06T00:21:59.733263329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:21:59.734319 env[1191]: time="2025-09-06T00:21:59.734273369Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 6 00:21:59.744219 env[1191]: time="2025-09-06T00:21:59.744154281Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:21:59.761460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924320025.mount: Deactivated successfully.
Sep 6 00:21:59.770349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057691855.mount: Deactivated successfully.
Sep 6 00:21:59.774327 env[1191]: time="2025-09-06T00:21:59.774264707Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\""
Sep 6 00:21:59.776288 env[1191]: time="2025-09-06T00:21:59.775321058Z" level=info msg="StartContainer for \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\""
Sep 6 00:21:59.815247 systemd[1]: Started cri-containerd-1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010.scope.
Sep 6 00:21:59.864727 env[1191]: time="2025-09-06T00:21:59.864650674Z" level=info msg="StartContainer for \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\" returns successfully"
Sep 6 00:21:59.875047 systemd[1]: cri-containerd-1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010.scope: Deactivated successfully.
Sep 6 00:21:59.938489 env[1191]: time="2025-09-06T00:21:59.938430829Z" level=info msg="shim disconnected" id=1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010
Sep 6 00:21:59.938818 env[1191]: time="2025-09-06T00:21:59.938790562Z" level=warning msg="cleaning up after shim disconnected" id=1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010 namespace=k8s.io
Sep 6 00:21:59.938929 env[1191]: time="2025-09-06T00:21:59.938907932Z" level=info msg="cleaning up dead shim"
Sep 6 00:21:59.949405 env[1191]: time="2025-09-06T00:21:59.949334647Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2373 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:00.657579 kubelet[1902]: E0906 00:22:00.657528 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:22:00.665468 env[1191]: time="2025-09-06T00:22:00.665415915Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:22:00.679733 env[1191]: time="2025-09-06T00:22:00.679683287Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\""
Sep 6 00:22:00.682032 env[1191]: time="2025-09-06T00:22:00.681984467Z" level=info msg="StartContainer for \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\""
Sep 6 00:22:00.709922 systemd[1]: Started cri-containerd-296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf.scope.
Sep 6 00:22:00.726996 kubelet[1902]: I0906 00:22:00.726904 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vgg85" podStartSLOduration=12.503900188 podStartE2EDuration="14.726877264s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="2025-09-06 00:21:48.018899187 +0000 UTC m=+6.811148425" lastFinishedPulling="2025-09-06 00:21:50.241876279 +0000 UTC m=+9.034125501" observedRunningTime="2025-09-06 00:21:50.632870868 +0000 UTC m=+9.425120110" watchObservedRunningTime="2025-09-06 00:22:00.726877264 +0000 UTC m=+19.519126510"
Sep 6 00:22:00.759143 env[1191]: time="2025-09-06T00:22:00.752255065Z" level=info msg="StartContainer for \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\" returns successfully"
Sep 6 00:22:00.756581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010-rootfs.mount: Deactivated successfully.
Sep 6 00:22:00.775385 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:22:00.775620 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:22:00.776350 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 00:22:00.778863 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:22:00.795883 systemd[1]: cri-containerd-296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf.scope: Deactivated successfully.
Sep 6 00:22:00.807653 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:22:00.831540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf-rootfs.mount: Deactivated successfully.
Sep 6 00:22:00.841708 env[1191]: time="2025-09-06T00:22:00.841564029Z" level=info msg="shim disconnected" id=296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf
Sep 6 00:22:00.842007 env[1191]: time="2025-09-06T00:22:00.841980338Z" level=warning msg="cleaning up after shim disconnected" id=296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf namespace=k8s.io
Sep 6 00:22:00.842110 env[1191]: time="2025-09-06T00:22:00.842083493Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:00.854263 env[1191]: time="2025-09-06T00:22:00.854212715Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:01.663186 kubelet[1902]: E0906 00:22:01.663139 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:22:01.678221 env[1191]: time="2025-09-06T00:22:01.678024072Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:22:01.705223 env[1191]: time="2025-09-06T00:22:01.705141550Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\""
Sep 6 00:22:01.707910 env[1191]: time="2025-09-06T00:22:01.706379597Z" level=info msg="StartContainer for \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\""
Sep 6 00:22:01.736308 systemd[1]: Started cri-containerd-77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7.scope.
Sep 6 00:22:01.759297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192819834.mount: Deactivated successfully.
Sep 6 00:22:01.802863 env[1191]: time="2025-09-06T00:22:01.802796579Z" level=info msg="StartContainer for \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\" returns successfully"
Sep 6 00:22:01.814844 systemd[1]: cri-containerd-77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7.scope: Deactivated successfully.
Sep 6 00:22:01.850616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7-rootfs.mount: Deactivated successfully.
Sep 6 00:22:01.866236 env[1191]: time="2025-09-06T00:22:01.866171172Z" level=info msg="shim disconnected" id=77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7
Sep 6 00:22:01.866681 env[1191]: time="2025-09-06T00:22:01.866651518Z" level=warning msg="cleaning up after shim disconnected" id=77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7 namespace=k8s.io
Sep 6 00:22:01.866824 env[1191]: time="2025-09-06T00:22:01.866803541Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:01.880061 env[1191]: time="2025-09-06T00:22:01.879993934Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:02.666574 kubelet[1902]: E0906 00:22:02.666523 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:22:02.674433 env[1191]: time="2025-09-06T00:22:02.674378002Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:22:02.707487 env[1191]: time="2025-09-06T00:22:02.707432934Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\""
Sep 6 00:22:02.709123 env[1191]: time="2025-09-06T00:22:02.709066853Z" level=info msg="StartContainer for \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\""
Sep 6 00:22:02.733194 systemd[1]: Started cri-containerd-6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960.scope.
Sep 6 00:22:02.784597 systemd[1]: cri-containerd-6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960.scope: Deactivated successfully.
Sep 6 00:22:02.786782 env[1191]: time="2025-09-06T00:22:02.786588368Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbaaf578_ccb9_45a0_9904_24b93546cfa1.slice/cri-containerd-6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960.scope/memory.events\": no such file or directory"
Sep 6 00:22:02.797778 env[1191]: time="2025-09-06T00:22:02.797621061Z" level=info msg="StartContainer for \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\" returns successfully"
Sep 6 00:22:02.823340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960-rootfs.mount: Deactivated successfully.
Sep 6 00:22:02.827529 env[1191]: time="2025-09-06T00:22:02.827472295Z" level=info msg="shim disconnected" id=6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960
Sep 6 00:22:02.828155 env[1191]: time="2025-09-06T00:22:02.828116817Z" level=warning msg="cleaning up after shim disconnected" id=6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960 namespace=k8s.io
Sep 6 00:22:02.828281 env[1191]: time="2025-09-06T00:22:02.828261950Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:02.839719 env[1191]: time="2025-09-06T00:22:02.839658587Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2552 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:03.672759 kubelet[1902]: E0906 00:22:03.672714 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:22:03.693972 env[1191]: time="2025-09-06T00:22:03.693913571Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:22:03.718271 env[1191]: time="2025-09-06T00:22:03.718198202Z" level=info msg="CreateContainer within sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\""
Sep 6 00:22:03.719890 env[1191]: time="2025-09-06T00:22:03.719839131Z" level=info msg="StartContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\""
Sep 6 00:22:03.764461 systemd[1]: Started cri-containerd-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc.scope.
Sep 6 00:22:03.815204 env[1191]: time="2025-09-06T00:22:03.815128185Z" level=info msg="StartContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" returns successfully"
Sep 6 00:22:03.846740 systemd[1]: run-containerd-runc-k8s.io-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc-runc.Mu9tJt.mount: Deactivated successfully.
Sep 6 00:22:04.005454 kubelet[1902]: I0906 00:22:04.004163 1902 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 6 00:22:04.060349 systemd[1]: Created slice kubepods-burstable-pode3d5dfce_412c_4085_9f22_71b0816fd608.slice.
Sep 6 00:22:04.068079 systemd[1]: Created slice kubepods-burstable-podd4da6383_b7d2_4b27_8f4b_4f7f476df13a.slice.
Sep 6 00:22:04.069274 kubelet[1902]: E0906 00:22:04.069219 1902 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap" Sep 6 00:22:04.069701 kubelet[1902]: I0906 00:22:04.069626 1902 status_manager.go:895] "Failed to get status for pod" podUID="e3d5dfce-412c-4085-9f22-71b0816fd608" pod="kube-system/coredns-674b8bbfcf-rww2q" err="pods \"coredns-674b8bbfcf-rww2q\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" Sep 6 00:22:04.073166 kubelet[1902]: I0906 00:22:04.073092 1902 status_manager.go:895] "Failed to get status for pod" podUID="e3d5dfce-412c-4085-9f22-71b0816fd608" pod="kube-system/coredns-674b8bbfcf-rww2q" err="pods \"coredns-674b8bbfcf-rww2q\" is forbidden: User \"system:node:ci-3510.3.8-n-f7f83b6e50\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f7f83b6e50' and this object" Sep 6 00:22:04.198133 kubelet[1902]: I0906 00:22:04.198046 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4da6383-b7d2-4b27-8f4b-4f7f476df13a-config-volume\") pod \"coredns-674b8bbfcf-hq4gh\" (UID: \"d4da6383-b7d2-4b27-8f4b-4f7f476df13a\") " pod="kube-system/coredns-674b8bbfcf-hq4gh" Sep 6 00:22:04.198514 kubelet[1902]: I0906 00:22:04.198462 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e3d5dfce-412c-4085-9f22-71b0816fd608-config-volume\") pod \"coredns-674b8bbfcf-rww2q\" (UID: \"e3d5dfce-412c-4085-9f22-71b0816fd608\") " pod="kube-system/coredns-674b8bbfcf-rww2q" Sep 6 00:22:04.198743 kubelet[1902]: I0906 00:22:04.198710 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbhw\" (UniqueName: \"kubernetes.io/projected/d4da6383-b7d2-4b27-8f4b-4f7f476df13a-kube-api-access-8hbhw\") pod \"coredns-674b8bbfcf-hq4gh\" (UID: \"d4da6383-b7d2-4b27-8f4b-4f7f476df13a\") " pod="kube-system/coredns-674b8bbfcf-hq4gh" Sep 6 00:22:04.199037 kubelet[1902]: I0906 00:22:04.198987 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd75l\" (UniqueName: \"kubernetes.io/projected/e3d5dfce-412c-4085-9f22-71b0816fd608-kube-api-access-gd75l\") pod \"coredns-674b8bbfcf-rww2q\" (UID: \"e3d5dfce-412c-4085-9f22-71b0816fd608\") " pod="kube-system/coredns-674b8bbfcf-rww2q" Sep 6 00:22:04.679923 kubelet[1902]: E0906 00:22:04.679880 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:04.706175 kubelet[1902]: I0906 00:22:04.706117 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sjkwz" podStartSLOduration=7.76404047 podStartE2EDuration="18.706089655s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="2025-09-06 00:21:48.793923088 +0000 UTC m=+7.586172311" lastFinishedPulling="2025-09-06 00:21:59.735972261 +0000 UTC m=+18.528221496" observedRunningTime="2025-09-06 00:22:04.705354588 +0000 UTC m=+23.497603833" watchObservedRunningTime="2025-09-06 00:22:04.706089655 +0000 UTC m=+23.498338899" Sep 6 00:22:05.265725 kubelet[1902]: E0906 00:22:05.265610 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:05.266895 env[1191]: time="2025-09-06T00:22:05.266823210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rww2q,Uid:e3d5dfce-412c-4085-9f22-71b0816fd608,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:05.273939 kubelet[1902]: E0906 00:22:05.273902 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:05.275111 env[1191]: time="2025-09-06T00:22:05.275054769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq4gh,Uid:d4da6383-b7d2-4b27-8f4b-4f7f476df13a,Namespace:kube-system,Attempt:0,}" Sep 6 00:22:05.682609 kubelet[1902]: E0906 00:22:05.682575 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:06.177613 systemd-networkd[1007]: cilium_host: Link UP Sep 6 00:22:06.179334 systemd-networkd[1007]: cilium_net: Link UP Sep 6 00:22:06.181856 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:22:06.182557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:22:06.182778 systemd-networkd[1007]: cilium_net: Gained carrier Sep 6 00:22:06.183027 systemd-networkd[1007]: cilium_host: Gained carrier Sep 6 00:22:06.354171 systemd-networkd[1007]: cilium_vxlan: Link UP Sep 6 00:22:06.354179 systemd-networkd[1007]: cilium_vxlan: Gained carrier Sep 6 00:22:06.685081 kubelet[1902]: E0906 00:22:06.685045 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:06.784145 kernel: NET: Registered PF_ALG 
protocol family Sep 6 00:22:07.010406 systemd-networkd[1007]: cilium_net: Gained IPv6LL Sep 6 00:22:07.074368 systemd-networkd[1007]: cilium_host: Gained IPv6LL Sep 6 00:22:07.836266 systemd-networkd[1007]: lxc_health: Link UP Sep 6 00:22:07.845747 systemd-networkd[1007]: lxc_health: Gained carrier Sep 6 00:22:07.846214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:22:08.290424 systemd-networkd[1007]: cilium_vxlan: Gained IPv6LL Sep 6 00:22:08.336965 systemd-networkd[1007]: lxceb5a7be0b0eb: Link UP Sep 6 00:22:08.348132 kernel: eth0: renamed from tmpd681e Sep 6 00:22:08.354135 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb5a7be0b0eb: link becomes ready Sep 6 00:22:08.354366 systemd-networkd[1007]: lxceb5a7be0b0eb: Gained carrier Sep 6 00:22:08.361386 systemd-networkd[1007]: lxc2c1f637c4645: Link UP Sep 6 00:22:08.396141 kernel: eth0: renamed from tmp60260 Sep 6 00:22:08.415149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2c1f637c4645: link becomes ready Sep 6 00:22:08.412421 systemd-networkd[1007]: lxc2c1f637c4645: Gained carrier Sep 6 00:22:08.704220 kubelet[1902]: E0906 00:22:08.704172 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:09.378314 systemd-networkd[1007]: lxc_health: Gained IPv6LL Sep 6 00:22:09.691324 kubelet[1902]: E0906 00:22:09.691200 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:09.826866 systemd-networkd[1007]: lxceb5a7be0b0eb: Gained IPv6LL Sep 6 00:22:10.027021 systemd-networkd[1007]: lxc2c1f637c4645: Gained IPv6LL Sep 6 00:22:10.700565 kubelet[1902]: E0906 00:22:10.700511 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.304788 env[1191]: time="2025-09-06T00:22:14.304677501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:14.304788 env[1191]: time="2025-09-06T00:22:14.304726301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:14.305821 env[1191]: time="2025-09-06T00:22:14.304737879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:14.305821 env[1191]: time="2025-09-06T00:22:14.304883789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599 pid=3101 runtime=io.containerd.runc.v2 Sep 6 00:22:14.317400 env[1191]: time="2025-09-06T00:22:14.317207467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:22:14.317507 env[1191]: time="2025-09-06T00:22:14.317295657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:22:14.317507 env[1191]: time="2025-09-06T00:22:14.317315850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:22:14.329056 env[1191]: time="2025-09-06T00:22:14.320268114Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d681e1094bfcca02e5cdbe8986bcac8977390e262849616e032c0e205aa93ca0 pid=3111 runtime=io.containerd.runc.v2 Sep 6 00:22:14.350257 systemd[1]: Started cri-containerd-d681e1094bfcca02e5cdbe8986bcac8977390e262849616e032c0e205aa93ca0.scope. 
Sep 6 00:22:14.361187 systemd[1]: Started cri-containerd-60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599.scope. Sep 6 00:22:14.368741 systemd[1]: run-containerd-runc-k8s.io-60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599-runc.D2divc.mount: Deactivated successfully. Sep 6 00:22:14.443862 env[1191]: time="2025-09-06T00:22:14.443814494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rww2q,Uid:e3d5dfce-412c-4085-9f22-71b0816fd608,Namespace:kube-system,Attempt:0,} returns sandbox id \"d681e1094bfcca02e5cdbe8986bcac8977390e262849616e032c0e205aa93ca0\"" Sep 6 00:22:14.446230 kubelet[1902]: E0906 00:22:14.446008 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.456147 env[1191]: time="2025-09-06T00:22:14.456071552Z" level=info msg="CreateContainer within sandbox \"d681e1094bfcca02e5cdbe8986bcac8977390e262849616e032c0e205aa93ca0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:22:14.491865 env[1191]: time="2025-09-06T00:22:14.491591650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq4gh,Uid:d4da6383-b7d2-4b27-8f4b-4f7f476df13a,Namespace:kube-system,Attempt:0,} returns sandbox id \"60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599\"" Sep 6 00:22:14.492580 kubelet[1902]: E0906 00:22:14.492549 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.498649 env[1191]: time="2025-09-06T00:22:14.498584726Z" level=info msg="CreateContainer within sandbox \"60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:22:14.506838 env[1191]: time="2025-09-06T00:22:14.506761859Z" 
level=info msg="CreateContainer within sandbox \"d681e1094bfcca02e5cdbe8986bcac8977390e262849616e032c0e205aa93ca0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d6d180979aa9f913e19d2d0de4aff5c54ba4f01190a3435cc9fa4bbce76c289\"" Sep 6 00:22:14.507652 env[1191]: time="2025-09-06T00:22:14.507592575Z" level=info msg="StartContainer for \"3d6d180979aa9f913e19d2d0de4aff5c54ba4f01190a3435cc9fa4bbce76c289\"" Sep 6 00:22:14.531240 env[1191]: time="2025-09-06T00:22:14.531145968Z" level=info msg="CreateContainer within sandbox \"60260533064d6e1eca591f9c4014b078fa63f1cd89c1e5f2f96f9733e8e88599\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"230dc19ac6218b0ae53420fec062954611af2ddb03a24844cf72e77fcc20e3de\"" Sep 6 00:22:14.532435 env[1191]: time="2025-09-06T00:22:14.532359874Z" level=info msg="StartContainer for \"230dc19ac6218b0ae53420fec062954611af2ddb03a24844cf72e77fcc20e3de\"" Sep 6 00:22:14.563929 systemd[1]: Started cri-containerd-3d6d180979aa9f913e19d2d0de4aff5c54ba4f01190a3435cc9fa4bbce76c289.scope. Sep 6 00:22:14.576494 systemd[1]: Started cri-containerd-230dc19ac6218b0ae53420fec062954611af2ddb03a24844cf72e77fcc20e3de.scope. 
Sep 6 00:22:14.640746 env[1191]: time="2025-09-06T00:22:14.640685388Z" level=info msg="StartContainer for \"3d6d180979aa9f913e19d2d0de4aff5c54ba4f01190a3435cc9fa4bbce76c289\" returns successfully" Sep 6 00:22:14.650965 env[1191]: time="2025-09-06T00:22:14.650897195Z" level=info msg="StartContainer for \"230dc19ac6218b0ae53420fec062954611af2ddb03a24844cf72e77fcc20e3de\" returns successfully" Sep 6 00:22:14.706785 kubelet[1902]: E0906 00:22:14.706739 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.710910 kubelet[1902]: E0906 00:22:14.710827 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:14.730570 kubelet[1902]: I0906 00:22:14.730497 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hq4gh" podStartSLOduration=28.730477876 podStartE2EDuration="28.730477876s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:14.730175013 +0000 UTC m=+33.522424254" watchObservedRunningTime="2025-09-06 00:22:14.730477876 +0000 UTC m=+33.522727118" Sep 6 00:22:15.713348 kubelet[1902]: E0906 00:22:15.713298 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:15.718416 kubelet[1902]: E0906 00:22:15.718376 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:15.731844 kubelet[1902]: I0906 
00:22:15.731772 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rww2q" podStartSLOduration=29.731749159 podStartE2EDuration="29.731749159s" podCreationTimestamp="2025-09-06 00:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:14.75103672 +0000 UTC m=+33.543285965" watchObservedRunningTime="2025-09-06 00:22:15.731749159 +0000 UTC m=+34.523998404" Sep 6 00:22:16.715434 kubelet[1902]: E0906 00:22:16.715389 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:16.715871 kubelet[1902]: E0906 00:22:16.715395 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:21.454565 systemd[1]: Started sshd@5-143.198.64.97:22-147.75.109.163:53444.service. Sep 6 00:22:21.520373 sshd[3265]: Accepted publickey for core from 147.75.109.163 port 53444 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:21.524491 sshd[3265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:21.533523 systemd[1]: Started session-6.scope. Sep 6 00:22:21.534979 systemd-logind[1179]: New session 6 of user core. Sep 6 00:22:21.787083 sshd[3265]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:21.792564 systemd[1]: sshd@5-143.198.64.97:22-147.75.109.163:53444.service: Deactivated successfully. Sep 6 00:22:21.793478 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:22:21.794811 systemd-logind[1179]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:22:21.795946 systemd-logind[1179]: Removed session 6. 
Sep 6 00:22:26.797271 systemd[1]: Started sshd@6-143.198.64.97:22-147.75.109.163:53452.service. Sep 6 00:22:26.858978 sshd[3278]: Accepted publickey for core from 147.75.109.163 port 53452 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:26.862220 sshd[3278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:26.869180 systemd-logind[1179]: New session 7 of user core. Sep 6 00:22:26.870308 systemd[1]: Started session-7.scope. Sep 6 00:22:27.049337 sshd[3278]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:27.053820 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:22:27.054164 systemd[1]: sshd@6-143.198.64.97:22-147.75.109.163:53452.service: Deactivated successfully. Sep 6 00:22:27.055206 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:22:27.057008 systemd-logind[1179]: Removed session 7. Sep 6 00:22:32.058022 systemd[1]: Started sshd@7-143.198.64.97:22-147.75.109.163:56952.service. Sep 6 00:22:32.117756 sshd[3290]: Accepted publickey for core from 147.75.109.163 port 56952 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:32.120533 sshd[3290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:32.127619 systemd-logind[1179]: New session 8 of user core. Sep 6 00:22:32.128116 systemd[1]: Started session-8.scope. Sep 6 00:22:32.286753 sshd[3290]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:32.290790 systemd-logind[1179]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:22:32.291088 systemd[1]: sshd@7-143.198.64.97:22-147.75.109.163:56952.service: Deactivated successfully. Sep 6 00:22:32.291916 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:22:32.293154 systemd-logind[1179]: Removed session 8. Sep 6 00:22:37.295907 systemd[1]: Started sshd@8-143.198.64.97:22-147.75.109.163:56960.service. 
Sep 6 00:22:37.353998 sshd[3302]: Accepted publickey for core from 147.75.109.163 port 56960 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:37.355469 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:37.362299 systemd[1]: Started session-9.scope. Sep 6 00:22:37.362865 systemd-logind[1179]: New session 9 of user core. Sep 6 00:22:37.514357 sshd[3302]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:37.518338 systemd-logind[1179]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:22:37.518988 systemd[1]: sshd@8-143.198.64.97:22-147.75.109.163:56960.service: Deactivated successfully. Sep 6 00:22:37.519837 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:22:37.521880 systemd-logind[1179]: Removed session 9. Sep 6 00:22:42.523604 systemd[1]: Started sshd@9-143.198.64.97:22-147.75.109.163:42904.service. Sep 6 00:22:42.585557 sshd[3316]: Accepted publickey for core from 147.75.109.163 port 42904 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:42.587827 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:42.594515 systemd[1]: Started session-10.scope. Sep 6 00:22:42.595083 systemd-logind[1179]: New session 10 of user core. Sep 6 00:22:42.738897 sshd[3316]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:42.746603 systemd[1]: Started sshd@10-143.198.64.97:22-147.75.109.163:42914.service. Sep 6 00:22:42.747767 systemd[1]: sshd@9-143.198.64.97:22-147.75.109.163:42904.service: Deactivated successfully. Sep 6 00:22:42.749054 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:22:42.750151 systemd-logind[1179]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:22:42.751051 systemd-logind[1179]: Removed session 10. 
Sep 6 00:22:42.803568 sshd[3328]: Accepted publickey for core from 147.75.109.163 port 42914 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:42.805744 sshd[3328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:42.812241 systemd-logind[1179]: New session 11 of user core. Sep 6 00:22:42.812988 systemd[1]: Started session-11.scope. Sep 6 00:22:43.028653 sshd[3328]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:43.029771 systemd[1]: Started sshd@11-143.198.64.97:22-147.75.109.163:42928.service. Sep 6 00:22:43.035736 systemd[1]: sshd@10-143.198.64.97:22-147.75.109.163:42914.service: Deactivated successfully. Sep 6 00:22:43.038664 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:22:43.040555 systemd-logind[1179]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:22:43.043877 systemd-logind[1179]: Removed session 11. Sep 6 00:22:43.095484 sshd[3338]: Accepted publickey for core from 147.75.109.163 port 42928 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:43.097757 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:43.106406 systemd-logind[1179]: New session 12 of user core. Sep 6 00:22:43.107876 systemd[1]: Started session-12.scope. Sep 6 00:22:43.260264 sshd[3338]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:43.264319 systemd[1]: sshd@11-143.198.64.97:22-147.75.109.163:42928.service: Deactivated successfully. Sep 6 00:22:43.265455 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:22:43.266408 systemd-logind[1179]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:22:43.267650 systemd-logind[1179]: Removed session 12. Sep 6 00:22:48.268139 systemd[1]: Started sshd@12-143.198.64.97:22-147.75.109.163:42938.service. 
Sep 6 00:22:48.323486 sshd[3356]: Accepted publickey for core from 147.75.109.163 port 42938 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:48.326075 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:48.332824 systemd[1]: Started session-13.scope. Sep 6 00:22:48.334931 systemd-logind[1179]: New session 13 of user core. Sep 6 00:22:48.468803 sshd[3356]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:48.472547 systemd[1]: sshd@12-143.198.64.97:22-147.75.109.163:42938.service: Deactivated successfully. Sep 6 00:22:48.473420 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:22:48.474738 systemd-logind[1179]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:22:48.476722 systemd-logind[1179]: Removed session 13. Sep 6 00:22:51.453350 kubelet[1902]: E0906 00:22:51.453310 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:22:53.477770 systemd[1]: Started sshd@13-143.198.64.97:22-147.75.109.163:37828.service. Sep 6 00:22:53.534503 sshd[3368]: Accepted publickey for core from 147.75.109.163 port 37828 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:53.536821 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:53.543194 systemd-logind[1179]: New session 14 of user core. Sep 6 00:22:53.544662 systemd[1]: Started session-14.scope. Sep 6 00:22:53.688315 sshd[3368]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:53.694361 systemd[1]: sshd@13-143.198.64.97:22-147.75.109.163:37828.service: Deactivated successfully. Sep 6 00:22:53.695771 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:22:53.697159 systemd-logind[1179]: Session 14 logged out. Waiting for processes to exit. 
Sep 6 00:22:53.699218 systemd[1]: Started sshd@14-143.198.64.97:22-147.75.109.163:37832.service. Sep 6 00:22:53.702132 systemd-logind[1179]: Removed session 14. Sep 6 00:22:53.758454 sshd[3380]: Accepted publickey for core from 147.75.109.163 port 37832 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:53.760509 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:53.767507 systemd-logind[1179]: New session 15 of user core. Sep 6 00:22:53.768415 systemd[1]: Started session-15.scope. Sep 6 00:22:54.143822 sshd[3380]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:54.150857 systemd[1]: Started sshd@15-143.198.64.97:22-147.75.109.163:37846.service. Sep 6 00:22:54.161462 systemd[1]: sshd@14-143.198.64.97:22-147.75.109.163:37832.service: Deactivated successfully. Sep 6 00:22:54.162613 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:22:54.163553 systemd-logind[1179]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:22:54.165943 systemd-logind[1179]: Removed session 15. Sep 6 00:22:54.215545 sshd[3389]: Accepted publickey for core from 147.75.109.163 port 37846 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:54.217662 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:54.223948 systemd-logind[1179]: New session 16 of user core. Sep 6 00:22:54.224659 systemd[1]: Started session-16.scope. Sep 6 00:22:54.992201 sshd[3389]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:55.001816 systemd[1]: Started sshd@16-143.198.64.97:22-147.75.109.163:37848.service. Sep 6 00:22:55.011411 systemd[1]: sshd@15-143.198.64.97:22-147.75.109.163:37846.service: Deactivated successfully. Sep 6 00:22:55.012875 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:22:55.014356 systemd-logind[1179]: Session 16 logged out. Waiting for processes to exit. 
Sep 6 00:22:55.018666 systemd-logind[1179]: Removed session 16. Sep 6 00:22:55.067943 sshd[3403]: Accepted publickey for core from 147.75.109.163 port 37848 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:55.070204 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:55.076965 systemd-logind[1179]: New session 17 of user core. Sep 6 00:22:55.078489 systemd[1]: Started session-17.scope. Sep 6 00:22:55.404124 sshd[3403]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:55.411020 systemd[1]: Started sshd@17-143.198.64.97:22-147.75.109.163:37862.service. Sep 6 00:22:55.411869 systemd[1]: sshd@16-143.198.64.97:22-147.75.109.163:37848.service: Deactivated successfully. Sep 6 00:22:55.416748 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:22:55.423830 systemd-logind[1179]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:22:55.426851 systemd-logind[1179]: Removed session 17. Sep 6 00:22:55.463765 sshd[3416]: Accepted publickey for core from 147.75.109.163 port 37862 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:22:55.466238 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:55.473384 systemd[1]: Started session-18.scope. Sep 6 00:22:55.474161 systemd-logind[1179]: New session 18 of user core. Sep 6 00:22:55.623063 sshd[3416]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:55.627637 systemd[1]: sshd@17-143.198.64.97:22-147.75.109.163:37862.service: Deactivated successfully. Sep 6 00:22:55.628610 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:22:55.629159 systemd-logind[1179]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:22:55.630313 systemd-logind[1179]: Removed session 18. Sep 6 00:23:00.632955 systemd[1]: Started sshd@18-143.198.64.97:22-147.75.109.163:46080.service. 
Sep 6 00:23:00.691966 sshd[3428]: Accepted publickey for core from 147.75.109.163 port 46080 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:23:00.693089 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:00.698841 systemd-logind[1179]: New session 19 of user core. Sep 6 00:23:00.699490 systemd[1]: Started session-19.scope. Sep 6 00:23:00.843946 sshd[3428]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:00.847463 systemd-logind[1179]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:23:00.847743 systemd[1]: sshd@18-143.198.64.97:22-147.75.109.163:46080.service: Deactivated successfully. Sep 6 00:23:00.848549 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:23:00.849541 systemd-logind[1179]: Removed session 19. Sep 6 00:23:05.853365 systemd[1]: Started sshd@19-143.198.64.97:22-147.75.109.163:46094.service. Sep 6 00:23:05.909958 sshd[3444]: Accepted publickey for core from 147.75.109.163 port 46094 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:23:05.912322 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:05.918412 systemd[1]: Started session-20.scope. Sep 6 00:23:05.919207 systemd-logind[1179]: New session 20 of user core. Sep 6 00:23:06.078637 sshd[3444]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:06.082659 systemd-logind[1179]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:23:06.083065 systemd[1]: sshd@19-143.198.64.97:22-147.75.109.163:46094.service: Deactivated successfully. Sep 6 00:23:06.084049 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:23:06.085043 systemd-logind[1179]: Removed session 20. Sep 6 00:23:11.087552 systemd[1]: Started sshd@20-143.198.64.97:22-147.75.109.163:48118.service. 
Sep 6 00:23:11.147809 sshd[3456]: Accepted publickey for core from 147.75.109.163 port 48118 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:23:11.149057 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:11.155262 systemd-logind[1179]: New session 21 of user core. Sep 6 00:23:11.155786 systemd[1]: Started session-21.scope. Sep 6 00:23:11.302385 sshd[3456]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:11.305730 systemd[1]: sshd@20-143.198.64.97:22-147.75.109.163:48118.service: Deactivated successfully. Sep 6 00:23:11.306893 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:23:11.307685 systemd-logind[1179]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:23:11.309136 systemd-logind[1179]: Removed session 21. Sep 6 00:23:14.451483 kubelet[1902]: E0906 00:23:14.451372 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:15.451127 kubelet[1902]: E0906 00:23:15.451058 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:15.453548 kubelet[1902]: E0906 00:23:15.453500 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:16.310746 systemd[1]: Started sshd@21-143.198.64.97:22-147.75.109.163:48134.service. Sep 6 00:23:16.364411 sshd[3469]: Accepted publickey for core from 147.75.109.163 port 48134 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:23:16.367048 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:16.374756 systemd[1]: Started session-22.scope. 
Sep 6 00:23:16.375465 systemd-logind[1179]: New session 22 of user core.
Sep 6 00:23:16.526403 sshd[3469]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:16.532393 systemd[1]: sshd@21-143.198.64.97:22-147.75.109.163:48134.service: Deactivated successfully.
Sep 6 00:23:16.533605 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:23:16.535325 systemd-logind[1179]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:23:16.536604 systemd-logind[1179]: Removed session 22.
Sep 6 00:23:17.451080 kubelet[1902]: E0906 00:23:17.451018 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:21.451875 kubelet[1902]: E0906 00:23:21.451834 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:21.533976 systemd[1]: Started sshd@22-143.198.64.97:22-147.75.109.163:54892.service.
Sep 6 00:23:21.590941 sshd[3484]: Accepted publickey for core from 147.75.109.163 port 54892 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:23:21.593225 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:21.599186 systemd-logind[1179]: New session 23 of user core.
Sep 6 00:23:21.599323 systemd[1]: Started session-23.scope.
Sep 6 00:23:21.737957 sshd[3484]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:21.745257 systemd[1]: Started sshd@23-143.198.64.97:22-147.75.109.163:54894.service.
Sep 6 00:23:21.748434 systemd[1]: sshd@22-143.198.64.97:22-147.75.109.163:54892.service: Deactivated successfully.
Sep 6 00:23:21.749707 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 00:23:21.753000 systemd-logind[1179]: Session 23 logged out. Waiting for processes to exit.
Sep 6 00:23:21.754615 systemd-logind[1179]: Removed session 23.
Sep 6 00:23:21.801093 sshd[3495]: Accepted publickey for core from 147.75.109.163 port 54894 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:23:21.803546 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:21.811773 systemd[1]: Started session-24.scope.
Sep 6 00:23:21.812482 systemd-logind[1179]: New session 24 of user core.
Sep 6 00:23:23.530860 systemd[1]: run-containerd-runc-k8s.io-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc-runc.qqt5XD.mount: Deactivated successfully.
Sep 6 00:23:23.545325 env[1191]: time="2025-09-06T00:23:23.545275539Z" level=info msg="StopContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" with timeout 30 (s)"
Sep 6 00:23:23.548416 env[1191]: time="2025-09-06T00:23:23.548364944Z" level=info msg="Stop container \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" with signal terminated"
Sep 6 00:23:23.576760 systemd[1]: cri-containerd-6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9.scope: Deactivated successfully.
Sep 6 00:23:23.588661 env[1191]: time="2025-09-06T00:23:23.588585855Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:23:23.609498 env[1191]: time="2025-09-06T00:23:23.609443152Z" level=info msg="StopContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" with timeout 2 (s)"
Sep 6 00:23:23.610472 env[1191]: time="2025-09-06T00:23:23.610428107Z" level=info msg="Stop container \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" with signal terminated"
Sep 6 00:23:23.618081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9-rootfs.mount: Deactivated successfully.
Sep 6 00:23:23.625011 env[1191]: time="2025-09-06T00:23:23.624951968Z" level=info msg="shim disconnected" id=6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9
Sep 6 00:23:23.626394 systemd-networkd[1007]: lxc_health: Link DOWN
Sep 6 00:23:23.626400 systemd-networkd[1007]: lxc_health: Lost carrier
Sep 6 00:23:23.628779 env[1191]: time="2025-09-06T00:23:23.628679140Z" level=warning msg="cleaning up after shim disconnected" id=6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9 namespace=k8s.io
Sep 6 00:23:23.628779 env[1191]: time="2025-09-06T00:23:23.628733528Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:23.651772 env[1191]: time="2025-09-06T00:23:23.651716725Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3547 runtime=io.containerd.runc.v2\n"
Sep 6 00:23:23.655203 env[1191]: time="2025-09-06T00:23:23.655088232Z" level=info msg="StopContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" returns successfully"
Sep 6 00:23:23.664732 env[1191]: time="2025-09-06T00:23:23.660939496Z" level=info msg="StopPodSandbox for \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\""
Sep 6 00:23:23.664732 env[1191]: time="2025-09-06T00:23:23.661042911Z" level=info msg="Container to stop \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.663555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95-shm.mount: Deactivated successfully.
Sep 6 00:23:23.670567 systemd[1]: cri-containerd-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc.scope: Deactivated successfully.
Sep 6 00:23:23.670875 systemd[1]: cri-containerd-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc.scope: Consumed 9.720s CPU time.
Sep 6 00:23:23.678523 systemd[1]: cri-containerd-c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95.scope: Deactivated successfully.
Sep 6 00:23:23.708457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc-rootfs.mount: Deactivated successfully.
Sep 6 00:23:23.714802 env[1191]: time="2025-09-06T00:23:23.714731233Z" level=info msg="shim disconnected" id=43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc
Sep 6 00:23:23.714802 env[1191]: time="2025-09-06T00:23:23.714795964Z" level=warning msg="cleaning up after shim disconnected" id=43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc namespace=k8s.io
Sep 6 00:23:23.714802 env[1191]: time="2025-09-06T00:23:23.714810162Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:23.727168 env[1191]: time="2025-09-06T00:23:23.727065580Z" level=info msg="shim disconnected" id=c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95
Sep 6 00:23:23.727168 env[1191]: time="2025-09-06T00:23:23.727165603Z" level=warning msg="cleaning up after shim disconnected" id=c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95 namespace=k8s.io
Sep 6 00:23:23.727168 env[1191]: time="2025-09-06T00:23:23.727180828Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:23.743372 env[1191]: time="2025-09-06T00:23:23.743298617Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3599 runtime=io.containerd.runc.v2\n"
Sep 6 00:23:23.750459 env[1191]: time="2025-09-06T00:23:23.746826141Z" level=info msg="TearDown network for sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" successfully"
Sep 6 00:23:23.750459 env[1191]: time="2025-09-06T00:23:23.746942423Z" level=info msg="StopPodSandbox for \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" returns successfully"
Sep 6 00:23:23.755759 env[1191]: time="2025-09-06T00:23:23.755657959Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3593 runtime=io.containerd.runc.v2\n"
Sep 6 00:23:23.758386 env[1191]: time="2025-09-06T00:23:23.758322407Z" level=info msg="StopContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" returns successfully"
Sep 6 00:23:23.760328 env[1191]: time="2025-09-06T00:23:23.760278360Z" level=info msg="StopPodSandbox for \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\""
Sep 6 00:23:23.760729 env[1191]: time="2025-09-06T00:23:23.760690402Z" level=info msg="Container to stop \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.760881 env[1191]: time="2025-09-06T00:23:23.760854560Z" level=info msg="Container to stop \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.761006 env[1191]: time="2025-09-06T00:23:23.760982180Z" level=info msg="Container to stop \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.761146 env[1191]: time="2025-09-06T00:23:23.761108148Z" level=info msg="Container to stop \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.761325 env[1191]: time="2025-09-06T00:23:23.761262018Z" level=info msg="Container to stop \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:23.774688 systemd[1]: cri-containerd-086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f.scope: Deactivated successfully.
Sep 6 00:23:23.812522 env[1191]: time="2025-09-06T00:23:23.810332452Z" level=info msg="shim disconnected" id=086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f
Sep 6 00:23:23.812522 env[1191]: time="2025-09-06T00:23:23.811167823Z" level=warning msg="cleaning up after shim disconnected" id=086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f namespace=k8s.io
Sep 6 00:23:23.812522 env[1191]: time="2025-09-06T00:23:23.811229593Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:23.820596 kubelet[1902]: I0906 00:23:23.820544 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlr9z\" (UniqueName: \"kubernetes.io/projected/e0ff582c-4f11-4d6f-9a62-8296af15feec-kube-api-access-qlr9z\") pod \"e0ff582c-4f11-4d6f-9a62-8296af15feec\" (UID: \"e0ff582c-4f11-4d6f-9a62-8296af15feec\") "
Sep 6 00:23:23.821230 kubelet[1902]: I0906 00:23:23.820871 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0ff582c-4f11-4d6f-9a62-8296af15feec-cilium-config-path\") pod \"e0ff582c-4f11-4d6f-9a62-8296af15feec\" (UID: \"e0ff582c-4f11-4d6f-9a62-8296af15feec\") "
Sep 6 00:23:23.831376 env[1191]: time="2025-09-06T00:23:23.831307955Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n"
Sep 6 00:23:23.832003 env[1191]: time="2025-09-06T00:23:23.831959280Z" level=info msg="TearDown network for sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" successfully"
Sep 6 00:23:23.832003 env[1191]: time="2025-09-06T00:23:23.831992630Z" level=info msg="StopPodSandbox for \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" returns successfully"
Sep 6 00:23:23.833824 kubelet[1902]: I0906 00:23:23.832210 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ff582c-4f11-4d6f-9a62-8296af15feec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0ff582c-4f11-4d6f-9a62-8296af15feec" (UID: "e0ff582c-4f11-4d6f-9a62-8296af15feec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:23:23.842351 kubelet[1902]: I0906 00:23:23.842279 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ff582c-4f11-4d6f-9a62-8296af15feec-kube-api-access-qlr9z" (OuterVolumeSpecName: "kube-api-access-qlr9z") pod "e0ff582c-4f11-4d6f-9a62-8296af15feec" (UID: "e0ff582c-4f11-4d6f-9a62-8296af15feec"). InnerVolumeSpecName "kube-api-access-qlr9z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:23:23.884280 systemd[1]: Removed slice kubepods-besteffort-pode0ff582c_4f11_4d6f_9a62_8296af15feec.slice.
Sep 6 00:23:23.887445 kubelet[1902]: I0906 00:23:23.887093 1902 scope.go:117] "RemoveContainer" containerID="6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9"
Sep 6 00:23:23.891157 env[1191]: time="2025-09-06T00:23:23.890950362Z" level=info msg="RemoveContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\""
Sep 6 00:23:23.894592 env[1191]: time="2025-09-06T00:23:23.894370466Z" level=info msg="RemoveContainer for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" returns successfully"
Sep 6 00:23:23.896780 kubelet[1902]: I0906 00:23:23.896737 1902 scope.go:117] "RemoveContainer" containerID="6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9"
Sep 6 00:23:23.897279 env[1191]: time="2025-09-06T00:23:23.897140542Z" level=error msg="ContainerStatus for \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\": not found"
Sep 6 00:23:23.912209 kubelet[1902]: E0906 00:23:23.912137 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\": not found" containerID="6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9"
Sep 6 00:23:23.912502 kubelet[1902]: I0906 00:23:23.912432 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9"} err="failed to get container status \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6dc1b19069646a8dede3231842cba6c811c6b476e848b19fa6b80c1e898fdcb9\": not found"
Sep 6 00:23:23.912589 kubelet[1902]: I0906 00:23:23.912575 1902 scope.go:117] "RemoveContainer" containerID="43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc"
Sep 6 00:23:23.919382 env[1191]: time="2025-09-06T00:23:23.919001033Z" level=info msg="RemoveContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\""
Sep 6 00:23:23.929612 kubelet[1902]: I0906 00:23:23.929203 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-478g8\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-kube-api-access-478g8\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.932889 kubelet[1902]: I0906 00:23:23.932824 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-net\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.932889 kubelet[1902]: I0906 00:23:23.932894 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-cgroup\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933085 kubelet[1902]: I0906 00:23:23.932961 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-xtables-lock\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933085 kubelet[1902]: I0906 00:23:23.933015 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hostproc\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933085 kubelet[1902]: I0906 00:23:23.933037 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-run\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933085 kubelet[1902]: I0906 00:23:23.933073 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-bpf-maps\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933247 kubelet[1902]: I0906 00:23:23.933130 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933247 kubelet[1902]: I0906 00:23:23.933159 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbaaf578-ccb9-45a0-9904-24b93546cfa1-clustermesh-secrets\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933247 kubelet[1902]: I0906 00:23:23.933201 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cni-path\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933247 kubelet[1902]: I0906 00:23:23.933224 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-lib-modules\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933365 kubelet[1902]: I0906 00:23:23.933246 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-etc-cni-netd\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933365 kubelet[1902]: I0906 00:23:23.933289 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-config-path\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933365 kubelet[1902]: I0906 00:23:23.933314 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-kernel\") pod \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\" (UID: \"dbaaf578-ccb9-45a0-9904-24b93546cfa1\") "
Sep 6 00:23:23.933449 kubelet[1902]: I0906 00:23:23.933400 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0ff582c-4f11-4d6f-9a62-8296af15feec-cilium-config-path\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:23.933449 kubelet[1902]: I0906 00:23:23.933434 1902 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlr9z\" (UniqueName: \"kubernetes.io/projected/e0ff582c-4f11-4d6f-9a62-8296af15feec-kube-api-access-qlr9z\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:23.933527 kubelet[1902]: I0906 00:23:23.933487 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933588 kubelet[1902]: I0906 00:23:23.933565 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933641 kubelet[1902]: I0906 00:23:23.933613 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933690 kubelet[1902]: I0906 00:23:23.933636 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933690 kubelet[1902]: I0906 00:23:23.933656 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hostproc" (OuterVolumeSpecName: "hostproc") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933765 kubelet[1902]: I0906 00:23:23.933700 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.933765 kubelet[1902]: I0906 00:23:23.933721 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.934910 env[1191]: time="2025-09-06T00:23:23.934866430Z" level=info msg="RemoveContainer for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" returns successfully"
Sep 6 00:23:23.939288 kubelet[1902]: I0906 00:23:23.939202 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-kube-api-access-478g8" (OuterVolumeSpecName: "kube-api-access-478g8") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "kube-api-access-478g8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:23:23.939874 kubelet[1902]: I0906 00:23:23.939669 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:23:23.940825 kubelet[1902]: I0906 00:23:23.940788 1902 scope.go:117] "RemoveContainer" containerID="6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960"
Sep 6 00:23:23.941417 kubelet[1902]: I0906 00:23:23.941388 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.941814 kubelet[1902]: I0906 00:23:23.941790 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cni-path" (OuterVolumeSpecName: "cni-path") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.942381 kubelet[1902]: I0906 00:23:23.942360 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:23.943409 env[1191]: time="2025-09-06T00:23:23.943360420Z" level=info msg="RemoveContainer for \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\""
Sep 6 00:23:23.946257 env[1191]: time="2025-09-06T00:23:23.946121655Z" level=info msg="RemoveContainer for \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\" returns successfully"
Sep 6 00:23:23.946884 kubelet[1902]: I0906 00:23:23.946845 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:23:23.947174 kubelet[1902]: I0906 00:23:23.947155 1902 scope.go:117] "RemoveContainer" containerID="77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7"
Sep 6 00:23:23.949174 env[1191]: time="2025-09-06T00:23:23.948785733Z" level=info msg="RemoveContainer for \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\""
Sep 6 00:23:23.949291 kubelet[1902]: I0906 00:23:23.949215 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbaaf578-ccb9-45a0-9904-24b93546cfa1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dbaaf578-ccb9-45a0-9904-24b93546cfa1" (UID: "dbaaf578-ccb9-45a0-9904-24b93546cfa1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 00:23:23.951659 env[1191]: time="2025-09-06T00:23:23.951607267Z" level=info msg="RemoveContainer for \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\" returns successfully"
Sep 6 00:23:23.952390 kubelet[1902]: I0906 00:23:23.952307 1902 scope.go:117] "RemoveContainer" containerID="296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf"
Sep 6 00:23:23.954078 env[1191]: time="2025-09-06T00:23:23.954032344Z" level=info msg="RemoveContainer for \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\""
Sep 6 00:23:23.956878 env[1191]: time="2025-09-06T00:23:23.956830608Z" level=info msg="RemoveContainer for \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\" returns successfully"
Sep 6 00:23:23.957285 kubelet[1902]: I0906 00:23:23.957263 1902 scope.go:117] "RemoveContainer" containerID="1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010"
Sep 6 00:23:23.958806 env[1191]: time="2025-09-06T00:23:23.958767875Z" level=info msg="RemoveContainer for \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\""
Sep 6 00:23:23.961713 env[1191]: time="2025-09-06T00:23:23.961660651Z" level=info msg="RemoveContainer for \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\" returns successfully"
Sep 6 00:23:23.962207 kubelet[1902]: I0906 00:23:23.962185 1902 scope.go:117] "RemoveContainer" containerID="43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc"
Sep 6 00:23:23.962611 env[1191]: time="2025-09-06T00:23:23.962548450Z" level=error msg="ContainerStatus for \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\": not found"
Sep 6 00:23:23.962936 kubelet[1902]: E0906 00:23:23.962914 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\": not found" containerID="43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc"
Sep 6 00:23:23.963068 kubelet[1902]: I0906 00:23:23.963042 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc"} err="failed to get container status \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"43c451b60d27be78cfcdf93ae068d42e6b7b4a1f6d03ead2320d19a8f4f096dc\": not found"
Sep 6 00:23:23.963170 kubelet[1902]: I0906 00:23:23.963155 1902 scope.go:117] "RemoveContainer" containerID="6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960"
Sep 6 00:23:23.963617 env[1191]: time="2025-09-06T00:23:23.963515653Z" level=error msg="ContainerStatus for \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\": not found"
Sep 6 00:23:23.963930 kubelet[1902]: E0906 00:23:23.963910 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\": not found" containerID="6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960"
Sep 6 00:23:23.964029 kubelet[1902]: I0906 00:23:23.964006 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960"} err="failed to get container status \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c95ec5925e575ad916d27eb1166bb1f53b3afd6b822dc478e7e009e3fbe8960\": not found"
Sep 6 00:23:23.964130 kubelet[1902]: I0906 00:23:23.964087 1902 scope.go:117] "RemoveContainer" containerID="77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7"
Sep 6 00:23:23.964607 env[1191]: time="2025-09-06T00:23:23.964524556Z" level=error msg="ContainerStatus for \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\": not found"
Sep 6 00:23:23.964767 kubelet[1902]: E0906 00:23:23.964741 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\": not found" containerID="77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7"
Sep 6 00:23:23.964870 kubelet[1902]: I0906 00:23:23.964850 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7"} err="failed to get container status \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\": rpc error: code = NotFound desc = an error occurred when try to find container \"77d3df5f06f1bf0628aa1ec5c0a91b28cc4733c570d1e947941b2c612a6beec7\": not found"
Sep 6 00:23:23.964956 kubelet[1902]: I0906 00:23:23.964944 1902 scope.go:117] "RemoveContainer" containerID="296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf"
Sep 6 00:23:23.965438 env[1191]: time="2025-09-06T00:23:23.965352343Z" level=error msg="ContainerStatus for \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\": not found"
Sep 6 00:23:23.965612 kubelet[1902]: E0906 00:23:23.965593 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\": not found" containerID="296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf"
Sep 6 00:23:23.965723 kubelet[1902]: I0906 00:23:23.965701 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf"} err="failed to get container status \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"296b0a9c7cc592af1a6a9b170c573535a8f30538b9e5cf481dac5751bbf0ebbf\": not found"
Sep 6 00:23:23.965794 kubelet[1902]: I0906 00:23:23.965782 1902 scope.go:117] "RemoveContainer" containerID="1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010"
Sep 6 00:23:23.966344 env[1191]: time="2025-09-06T00:23:23.966257171Z" level=error msg="ContainerStatus for \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\": not found"
Sep 6 00:23:23.966494 kubelet[1902]: E0906 00:23:23.966478 1902 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\": not found" containerID="1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010"
Sep 6 00:23:23.966591 kubelet[1902]: I0906 00:23:23.966571 1902 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010"} err="failed to get container status \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f240ceee1e6191ba0e78a0cfb6952a346c9c22ac3341abb9d4233e33746b010\": not found"
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034481 1902 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-478g8\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-kube-api-access-478g8\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034522 1902 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-net\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034539 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-cgroup\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034552 1902 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-xtables-lock\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034564 1902 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hostproc\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\""
Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034576 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName:
\"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-run\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034588 1902 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-bpf-maps\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034612 kubelet[1902]: I0906 00:23:24.034603 1902 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbaaf578-ccb9-45a0-9904-24b93546cfa1-hubble-tls\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034615 1902 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbaaf578-ccb9-45a0-9904-24b93546cfa1-clustermesh-secrets\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034627 1902 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cni-path\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034640 1902 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-lib-modules\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034655 1902 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-etc-cni-netd\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034667 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/dbaaf578-ccb9-45a0-9904-24b93546cfa1-cilium-config-path\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.034987 kubelet[1902]: I0906 00:23:24.034679 1902 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbaaf578-ccb9-45a0-9904-24b93546cfa1-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:24.201395 systemd[1]: Removed slice kubepods-burstable-poddbaaf578_ccb9_45a0_9904_24b93546cfa1.slice. Sep 6 00:23:24.201496 systemd[1]: kubepods-burstable-poddbaaf578_ccb9_45a0_9904_24b93546cfa1.slice: Consumed 9.870s CPU time. Sep 6 00:23:24.524489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f-rootfs.mount: Deactivated successfully. Sep 6 00:23:24.525049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f-shm.mount: Deactivated successfully. Sep 6 00:23:24.525394 systemd[1]: var-lib-kubelet-pods-dbaaf578\x2dccb9\x2d45a0\x2d9904\x2d24b93546cfa1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:23:24.525702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95-rootfs.mount: Deactivated successfully. Sep 6 00:23:24.525995 systemd[1]: var-lib-kubelet-pods-dbaaf578\x2dccb9\x2d45a0\x2d9904\x2d24b93546cfa1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:23:24.526265 systemd[1]: var-lib-kubelet-pods-e0ff582c\x2d4f11\x2d4d6f\x2d9a62\x2d8296af15feec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqlr9z.mount: Deactivated successfully. 
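The kubelet entries above repeatedly hit gRPC NotFound when asking the runtime for the status of containers it has just removed: a benign race in which "not found" effectively means "already gone". A minimal Python sketch of that pattern follows; all names here (`NotFoundError`, `fake_runtime`, `remove_container`) are illustrative stand-ins, not the kubelet or CRI API.

```python
class NotFoundError(Exception):
    """Stands in for a gRPC status with code = NotFound."""


def fake_runtime(known_ids):
    """Returns a status-lookup function over a fixed set of container IDs."""
    def container_status(cid):
        if cid not in known_ids:
            # Mirrors containerd's wording in the log above.
            raise NotFoundError(
                f'an error occurred when try to find container "{cid}": not found'
            )
        return {"id": cid, "state": "CONTAINER_EXITED"}
    return container_status


def remove_container(status_fn, cid):
    """True if the container is gone, i.e. the runtime no longer knows the ID."""
    try:
        status_fn(cid)
    except NotFoundError:
        # Deletion already happened elsewhere; the error is logged
        # by the caller but treated as success, as in the entries above.
        return True
    return False
```

The log shows exactly this shape: `ContainerStatus` fails with NotFound, the deletor records the error, and cleanup proceeds anyway.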
Sep 6 00:23:24.526533 systemd[1]: var-lib-kubelet-pods-dbaaf578\x2dccb9\x2d45a0\x2d9904\x2d24b93546cfa1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d478g8.mount: Deactivated successfully.
Sep 6 00:23:25.406149 sshd[3495]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:25.415433 systemd[1]: Started sshd@24-143.198.64.97:22-147.75.109.163:54898.service.
Sep 6 00:23:25.416877 systemd[1]: sshd@23-143.198.64.97:22-147.75.109.163:54894.service: Deactivated successfully.
Sep 6 00:23:25.419151 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:23:25.424367 systemd-logind[1179]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:23:25.426724 systemd-logind[1179]: Removed session 24.
Sep 6 00:23:25.453587 kubelet[1902]: I0906 00:23:25.453551 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbaaf578-ccb9-45a0-9904-24b93546cfa1" path="/var/lib/kubelet/pods/dbaaf578-ccb9-45a0-9904-24b93546cfa1/volumes"
Sep 6 00:23:25.455221 kubelet[1902]: I0906 00:23:25.455184 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ff582c-4f11-4d6f-9a62-8296af15feec" path="/var/lib/kubelet/pods/e0ff582c-4f11-4d6f-9a62-8296af15feec/volumes"
Sep 6 00:23:25.484702 sshd[3657]: Accepted publickey for core from 147.75.109.163 port 54898 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:23:25.486342 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:25.493264 systemd-logind[1179]: New session 25 of user core.
Sep 6 00:23:25.493786 systemd[1]: Started session-25.scope.
Sep 6 00:23:26.314434 sshd[3657]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:26.327389 systemd[1]: Started sshd@25-143.198.64.97:22-147.75.109.163:54908.service.
Sep 6 00:23:26.328840 systemd[1]: sshd@24-143.198.64.97:22-147.75.109.163:54898.service: Deactivated successfully.
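The mount unit names above (e.g. `var-lib-kubelet-pods-dbaaf578\x2dccb9...`) use systemd's path escaping: `/` separators become `-`, while literal `-` and other special characters become `\xNN` hex escapes (so `~` appears as `\x7e`). A rough sketch of that rule, under the assumption that the simplified character classes below suffice; real `systemd-escape --path` also handles leading dots, empty paths, and backslashes more carefully.

```python
def systemd_escape_path(path):
    """Approximate systemd-escape --path: '/' -> '-', specials -> \\xNN."""
    segments = [s for s in path.strip("/").split("/") if s]
    escaped = []
    for seg in segments:
        out = []
        for i, ch in enumerate(seg):
            # Alphanumerics, '_' and non-leading '.' pass through unescaped.
            if ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        escaped.append("".join(out))
    return "-".join(escaped)
```

With this rule, `/var/lib/kubelet` maps to `var-lib-kubelet`, and a hyphen inside a path component is rendered as the `\x2d` sequences seen in the log.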
Sep 6 00:23:26.331599 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:23:26.335341 systemd-logind[1179]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:23:26.337667 systemd-logind[1179]: Removed session 25.
Sep 6 00:23:26.393541 sshd[3668]: Accepted publickey for core from 147.75.109.163 port 54908 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:23:26.395139 sshd[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:26.406341 systemd[1]: Started session-26.scope.
Sep 6 00:23:26.407148 systemd-logind[1179]: New session 26 of user core.
Sep 6 00:23:26.425927 systemd[1]: Created slice kubepods-burstable-podd8653713_a6f3_4189_bdc3_d10f8e1c807b.slice.
Sep 6 00:23:26.451146 kubelet[1902]: E0906 00:23:26.451084 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:26.453255 kubelet[1902]: I0906 00:23:26.453205 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-net\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.453543 kubelet[1902]: I0906 00:23:26.453508 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-run\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.453732 kubelet[1902]: I0906 00:23:26.453707 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-bpf-maps\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454169 kubelet[1902]: I0906 00:23:26.454146 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-kernel\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454300 kubelet[1902]: I0906 00:23:26.454277 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hostproc\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454473 kubelet[1902]: I0906 00:23:26.454449 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-etc-cni-netd\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454618 kubelet[1902]: I0906 00:23:26.454595 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-clustermesh-secrets\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454762 kubelet[1902]: I0906 00:23:26.454738 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-config-path\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.454912 kubelet[1902]: I0906 00:23:26.454889 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-lib-modules\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455019 kubelet[1902]: I0906 00:23:26.455004 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hubble-tls\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455194 kubelet[1902]: I0906 00:23:26.455158 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-cgroup\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455335 kubelet[1902]: I0906 00:23:26.455310 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cni-path\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455499 kubelet[1902]: I0906 00:23:26.455472 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-xtables-lock\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455645 kubelet[1902]: I0906 00:23:26.455621 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-ipsec-secrets\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.455777 kubelet[1902]: I0906 00:23:26.455755 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfxj\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-kube-api-access-4kfxj\") pod \"cilium-grctj\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") " pod="kube-system/cilium-grctj"
Sep 6 00:23:26.600830 kubelet[1902]: E0906 00:23:26.600692 1902 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:23:26.695349 sshd[3668]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:26.700450 systemd[1]: sshd@25-143.198.64.97:22-147.75.109.163:54908.service: Deactivated successfully.
Sep 6 00:23:26.702310 systemd[1]: session-26.scope: Deactivated successfully.
Sep 6 00:23:26.704169 systemd-logind[1179]: Session 26 logged out. Waiting for processes to exit.
Sep 6 00:23:26.710308 systemd[1]: Started sshd@26-143.198.64.97:22-147.75.109.163:54918.service.
Sep 6 00:23:26.712422 systemd-logind[1179]: Removed session 26.
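The recurring "Nameserver limits exceeded" warnings come from the kubelet capping a pod's resolv.conf at three nameservers and logging the line it actually applies (here including a duplicated 67.207.67.2 from the host's resolver config). A simplified sketch of that truncation; the parsing is illustrative, not the kubelet's implementation.

```python
# Mirrors the kubelet's documented cap of 3 nameservers per resolv.conf.
MAX_NAMESERVERS = 3


def applied_nameservers(resolv_conf_text):
    """Collect nameserver entries in order and truncate to the limit."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    # Entries beyond the cap are silently dropped; the kubelet logs
    # a "Nameserver limits exceeded" warning when that happens.
    return servers[:MAX_NAMESERVERS]
```

Note that duplicates are not removed, which is why the applied line in the log can list the same address twice.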
Sep 6 00:23:26.718083 kubelet[1902]: E0906 00:23:26.718042 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:26.718727 env[1191]: time="2025-09-06T00:23:26.718688381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grctj,Uid:d8653713-a6f3-4189-bdc3-d10f8e1c807b,Namespace:kube-system,Attempt:0,}"
Sep 6 00:23:26.751515 env[1191]: time="2025-09-06T00:23:26.751423104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:23:26.751693 env[1191]: time="2025-09-06T00:23:26.751538798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:23:26.751693 env[1191]: time="2025-09-06T00:23:26.751565071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:23:26.753795 env[1191]: time="2025-09-06T00:23:26.751956104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47 pid=3694 runtime=io.containerd.runc.v2
Sep 6 00:23:26.773266 systemd[1]: Started cri-containerd-0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47.scope.
Sep 6 00:23:26.777053 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:23:26.777930 sshd[3685]: Accepted publickey for core from 147.75.109.163 port 54918 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:23:26.790680 systemd[1]: Started session-27.scope.
Sep 6 00:23:26.791188 systemd-logind[1179]: New session 27 of user core.
Sep 6 00:23:26.829417 env[1191]: time="2025-09-06T00:23:26.829373436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grctj,Uid:d8653713-a6f3-4189-bdc3-d10f8e1c807b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\""
Sep 6 00:23:26.830703 kubelet[1902]: E0906 00:23:26.830654 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:26.835380 env[1191]: time="2025-09-06T00:23:26.835336456Z" level=info msg="CreateContainer within sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:23:26.844739 env[1191]: time="2025-09-06T00:23:26.844665571Z" level=info msg="CreateContainer within sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\""
Sep 6 00:23:26.845792 env[1191]: time="2025-09-06T00:23:26.845686921Z" level=info msg="StartContainer for \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\""
Sep 6 00:23:26.872698 systemd[1]: Started cri-containerd-52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc.scope.
Sep 6 00:23:26.897746 systemd[1]: cri-containerd-52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc.scope: Deactivated successfully.
Sep 6 00:23:26.915125 env[1191]: time="2025-09-06T00:23:26.915039939Z" level=info msg="shim disconnected" id=52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc
Sep 6 00:23:26.915125 env[1191]: time="2025-09-06T00:23:26.915116940Z" level=warning msg="cleaning up after shim disconnected" id=52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc namespace=k8s.io
Sep 6 00:23:26.915125 env[1191]: time="2025-09-06T00:23:26.915129520Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:26.927148 env[1191]: time="2025-09-06T00:23:26.926940847Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3760 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:23:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 6 00:23:26.927443 env[1191]: time="2025-09-06T00:23:26.927241528Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Sep 6 00:23:26.927648 env[1191]: time="2025-09-06T00:23:26.927616269Z" level=error msg="Failed to pipe stdout of container \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\"" error="reading from a closed fifo"
Sep 6 00:23:26.927718 env[1191]: time="2025-09-06T00:23:26.927680939Z" level=error msg="Failed to pipe stderr of container \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\"" error="reading from a closed fifo"
Sep 6 00:23:26.931129 env[1191]: time="2025-09-06T00:23:26.930924281Z" level=error msg="StartContainer for \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 6 00:23:26.931290 kubelet[1902]: E0906 00:23:26.931246 1902 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc"
Sep 6 00:23:26.935735 kubelet[1902]: E0906 00:23:26.935579 1902 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 6 00:23:26.935735 kubelet[1902]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 6 00:23:26.935735 kubelet[1902]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 6 00:23:26.935735 kubelet[1902]: rm /hostbin/cilium-mount
Sep 6 00:23:26.936015 kubelet[1902]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kfxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-grctj_kube-system(d8653713-a6f3-4189-bdc3-d10f8e1c807b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 6 00:23:26.936015 kubelet[1902]: > logger="UnhandledError"
Sep 6 00:23:26.938088 kubelet[1902]: E0906 00:23:26.937284 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-grctj" podUID="d8653713-a6f3-4189-bdc3-d10f8e1c807b"
Sep 6 00:23:27.910495 env[1191]: time="2025-09-06T00:23:27.910450805Z" level=info msg="StopPodSandbox for \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\""
Sep 6 00:23:27.914039 env[1191]: time="2025-09-06T00:23:27.910511417Z" level=info msg="Container to stop \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:23:27.912824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47-shm.mount: Deactivated successfully.
Sep 6 00:23:27.922673 systemd[1]: cri-containerd-0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47.scope: Deactivated successfully.
Sep 6 00:23:27.954245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47-rootfs.mount: Deactivated successfully.
Sep 6 00:23:27.958846 env[1191]: time="2025-09-06T00:23:27.958772896Z" level=info msg="shim disconnected" id=0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47
Sep 6 00:23:27.958846 env[1191]: time="2025-09-06T00:23:27.958834173Z" level=warning msg="cleaning up after shim disconnected" id=0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47 namespace=k8s.io
Sep 6 00:23:27.958846 env[1191]: time="2025-09-06T00:23:27.958845236Z" level=info msg="cleaning up dead shim"
Sep 6 00:23:27.970848 env[1191]: time="2025-09-06T00:23:27.970778812Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n"
Sep 6 00:23:27.971213 env[1191]: time="2025-09-06T00:23:27.971176727Z" level=info msg="TearDown network for sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" successfully"
Sep 6 00:23:27.971213 env[1191]: time="2025-09-06T00:23:27.971209710Z" level=info msg="StopPodSandbox for \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" returns successfully"
Sep 6 00:23:28.068830 kubelet[1902]: I0906 00:23:28.068772 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-bpf-maps\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.068830 kubelet[1902]: I0906 00:23:28.068823 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-etc-cni-netd\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068849 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-kernel\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068863 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hostproc\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068883 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-lib-modules\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068913 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-clustermesh-secrets\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068930 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-config-path\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068975 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hubble-tls\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.068990 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cni-path\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069004 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-xtables-lock\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069029 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-ipsec-secrets\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069054 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-net\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069077 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-run\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069127 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-cgroup\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.069479 kubelet[1902]: I0906 00:23:28.069144 1902 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kfxj\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-kube-api-access-4kfxj\") pod \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\" (UID: \"d8653713-a6f3-4189-bdc3-d10f8e1c807b\") "
Sep 6 00:23:28.071947 kubelet[1902]: I0906 00:23:28.071895 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:23:28.072113 kubelet[1902]: I0906 00:23:28.071971 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:28.072113 kubelet[1902]: I0906 00:23:28.071993 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:28.072113 kubelet[1902]: I0906 00:23:28.072008 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:28.072113 kubelet[1902]: I0906 00:23:28.072025 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:28.072113 kubelet[1902]: I0906 00:23:28.072039 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 00:23:28.072772 kubelet[1902]: I0906 00:23:28.072730 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:23:28.072772 kubelet[1902]: I0906 00:23:28.072777 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:23:28.077084 systemd[1]: var-lib-kubelet-pods-d8653713\x2da6f3\x2d4189\x2dbdc3\x2dd10f8e1c807b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4kfxj.mount: Deactivated successfully. Sep 6 00:23:28.079036 kubelet[1902]: I0906 00:23:28.078986 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:23:28.079318 kubelet[1902]: I0906 00:23:28.079297 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:23:28.079415 kubelet[1902]: I0906 00:23:28.079381 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-kube-api-access-4kfxj" (OuterVolumeSpecName: "kube-api-access-4kfxj") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "kube-api-access-4kfxj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:23:28.079502 kubelet[1902]: I0906 00:23:28.079486 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:23:28.082958 systemd[1]: var-lib-kubelet-pods-d8653713\x2da6f3\x2d4189\x2dbdc3\x2dd10f8e1c807b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:23:28.085380 kubelet[1902]: I0906 00:23:28.085320 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:23:28.086074 kubelet[1902]: I0906 00:23:28.086029 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:23:28.086433 kubelet[1902]: I0906 00:23:28.086397 1902 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d8653713-a6f3-4189-bdc3-d10f8e1c807b" (UID: "d8653713-a6f3-4189-bdc3-d10f8e1c807b"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:23:28.169732 kubelet[1902]: I0906 00:23:28.169597 1902 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170078 kubelet[1902]: I0906 00:23:28.170040 1902 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hostproc\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170233 kubelet[1902]: I0906 00:23:28.170219 1902 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-lib-modules\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170341 kubelet[1902]: I0906 00:23:28.170328 1902 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-clustermesh-secrets\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170453 kubelet[1902]: I0906 00:23:28.170428 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-config-path\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170590 kubelet[1902]: I0906 00:23:28.170566 1902 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-hubble-tls\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170732 kubelet[1902]: I0906 00:23:28.170713 1902 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cni-path\") on node 
\"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170832 kubelet[1902]: I0906 00:23:28.170820 1902 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-xtables-lock\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.170951 kubelet[1902]: I0906 00:23:28.170931 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171076 kubelet[1902]: I0906 00:23:28.171060 1902 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-host-proc-sys-net\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171225 kubelet[1902]: I0906 00:23:28.171205 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-run\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171366 kubelet[1902]: I0906 00:23:28.171352 1902 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-cilium-cgroup\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171464 kubelet[1902]: I0906 00:23:28.171450 1902 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4kfxj\" (UniqueName: \"kubernetes.io/projected/d8653713-a6f3-4189-bdc3-d10f8e1c807b-kube-api-access-4kfxj\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171548 kubelet[1902]: I0906 00:23:28.171536 1902 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-bpf-maps\") on node 
\"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.171639 kubelet[1902]: I0906 00:23:28.171628 1902 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8653713-a6f3-4189-bdc3-d10f8e1c807b-etc-cni-netd\") on node \"ci-3510.3.8-n-f7f83b6e50\" DevicePath \"\"" Sep 6 00:23:28.563028 systemd[1]: var-lib-kubelet-pods-d8653713\x2da6f3\x2d4189\x2dbdc3\x2dd10f8e1c807b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:23:28.563450 systemd[1]: var-lib-kubelet-pods-d8653713\x2da6f3\x2d4189\x2dbdc3\x2dd10f8e1c807b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:23:28.914400 kubelet[1902]: I0906 00:23:28.914355 1902 scope.go:117] "RemoveContainer" containerID="52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc" Sep 6 00:23:28.919668 env[1191]: time="2025-09-06T00:23:28.919283755Z" level=info msg="RemoveContainer for \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\"" Sep 6 00:23:28.921427 systemd[1]: Removed slice kubepods-burstable-podd8653713_a6f3_4189_bdc3_d10f8e1c807b.slice. Sep 6 00:23:28.925011 env[1191]: time="2025-09-06T00:23:28.924951253Z" level=info msg="RemoveContainer for \"52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc\" returns successfully" Sep 6 00:23:28.998441 systemd[1]: Created slice kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice. 
Sep 6 00:23:29.078933 kubelet[1902]: I0906 00:23:29.078868 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-cni-path\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.078933 kubelet[1902]: I0906 00:23:29.078934 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-lib-modules\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.078969 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-cilium-run\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.078991 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-bpf-maps\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079015 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e36e8a5b-a9b8-46ab-957c-6f4accca1437-clustermesh-secrets\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079040 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v67mb\" (UniqueName: \"kubernetes.io/projected/e36e8a5b-a9b8-46ab-957c-6f4accca1437-kube-api-access-v67mb\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079068 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e36e8a5b-a9b8-46ab-957c-6f4accca1437-cilium-ipsec-secrets\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079108 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-host-proc-sys-net\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079134 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e36e8a5b-a9b8-46ab-957c-6f4accca1437-cilium-config-path\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079162 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-host-proc-sys-kernel\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079183 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e36e8a5b-a9b8-46ab-957c-6f4accca1437-hubble-tls\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079208 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-cilium-cgroup\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079229 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-etc-cni-netd\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079253 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-hostproc\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.079452 kubelet[1902]: I0906 00:23:29.079276 1902 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36e8a5b-a9b8-46ab-957c-6f4accca1437-xtables-lock\") pod \"cilium-tw6nx\" (UID: \"e36e8a5b-a9b8-46ab-957c-6f4accca1437\") " pod="kube-system/cilium-tw6nx" Sep 6 00:23:29.303185 kubelet[1902]: E0906 00:23:29.302990 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:29.303704 env[1191]: time="2025-09-06T00:23:29.303656710Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tw6nx,Uid:e36e8a5b-a9b8-46ab-957c-6f4accca1437,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:29.323944 env[1191]: time="2025-09-06T00:23:29.323821355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:29.323944 env[1191]: time="2025-09-06T00:23:29.323865740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:29.323944 env[1191]: time="2025-09-06T00:23:29.323877087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:29.324684 env[1191]: time="2025-09-06T00:23:29.324497644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df pid=3823 runtime=io.containerd.runc.v2 Sep 6 00:23:29.341323 systemd[1]: Started cri-containerd-2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df.scope. 
Sep 6 00:23:29.384810 env[1191]: time="2025-09-06T00:23:29.384677398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tw6nx,Uid:e36e8a5b-a9b8-46ab-957c-6f4accca1437,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\"" Sep 6 00:23:29.386145 kubelet[1902]: E0906 00:23:29.385815 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:29.396669 env[1191]: time="2025-09-06T00:23:29.396609263Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:23:29.413018 env[1191]: time="2025-09-06T00:23:29.412942824Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d\"" Sep 6 00:23:29.415352 env[1191]: time="2025-09-06T00:23:29.413727634Z" level=info msg="StartContainer for \"69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d\"" Sep 6 00:23:29.439129 systemd[1]: Started cri-containerd-69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d.scope. 
Sep 6 00:23:29.456992 kubelet[1902]: I0906 00:23:29.456934 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8653713-a6f3-4189-bdc3-d10f8e1c807b" path="/var/lib/kubelet/pods/d8653713-a6f3-4189-bdc3-d10f8e1c807b/volumes" Sep 6 00:23:29.491748 env[1191]: time="2025-09-06T00:23:29.490486100Z" level=info msg="StartContainer for \"69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d\" returns successfully" Sep 6 00:23:29.518269 systemd[1]: cri-containerd-69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d.scope: Deactivated successfully. Sep 6 00:23:29.555805 env[1191]: time="2025-09-06T00:23:29.555640631Z" level=info msg="shim disconnected" id=69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d Sep 6 00:23:29.555805 env[1191]: time="2025-09-06T00:23:29.555693983Z" level=warning msg="cleaning up after shim disconnected" id=69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d namespace=k8s.io Sep 6 00:23:29.555805 env[1191]: time="2025-09-06T00:23:29.555706447Z" level=info msg="cleaning up dead shim" Sep 6 00:23:29.576035 env[1191]: time="2025-09-06T00:23:29.575951640Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3905 runtime=io.containerd.runc.v2\n" Sep 6 00:23:29.919130 kubelet[1902]: E0906 00:23:29.918692 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:29.922984 env[1191]: time="2025-09-06T00:23:29.922668252Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:23:29.937832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669641992.mount: Deactivated successfully. 
Sep 6 00:23:29.944550 env[1191]: time="2025-09-06T00:23:29.944498972Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5\"" Sep 6 00:23:29.945574 env[1191]: time="2025-09-06T00:23:29.945537155Z" level=info msg="StartContainer for \"d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5\"" Sep 6 00:23:29.977810 systemd[1]: Started cri-containerd-d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5.scope. Sep 6 00:23:30.021217 kubelet[1902]: W0906 00:23:30.021163 1902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8653713_a6f3_4189_bdc3_d10f8e1c807b.slice/cri-containerd-52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc.scope WatchSource:0}: container "52a0a4d87c59df8e0d88bd443c10fdc803969233ab73105bb3e723df176976dc" in namespace "k8s.io": not found Sep 6 00:23:30.032431 env[1191]: time="2025-09-06T00:23:30.032315858Z" level=info msg="StartContainer for \"d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5\" returns successfully" Sep 6 00:23:30.045501 systemd[1]: cri-containerd-d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5.scope: Deactivated successfully. 
Sep 6 00:23:30.079202 env[1191]: time="2025-09-06T00:23:30.079147598Z" level=info msg="shim disconnected" id=d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5 Sep 6 00:23:30.079202 env[1191]: time="2025-09-06T00:23:30.079193389Z" level=warning msg="cleaning up after shim disconnected" id=d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5 namespace=k8s.io Sep 6 00:23:30.079202 env[1191]: time="2025-09-06T00:23:30.079202614Z" level=info msg="cleaning up dead shim" Sep 6 00:23:30.093787 env[1191]: time="2025-09-06T00:23:30.093702227Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3968 runtime=io.containerd.runc.v2\n" Sep 6 00:23:30.563413 systemd[1]: run-containerd-runc-k8s.io-d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5-runc.YwLWuL.mount: Deactivated successfully. Sep 6 00:23:30.563540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5-rootfs.mount: Deactivated successfully. Sep 6 00:23:30.926626 kubelet[1902]: E0906 00:23:30.926586 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:30.933168 env[1191]: time="2025-09-06T00:23:30.933081408Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:23:30.947075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342700898.mount: Deactivated successfully. 
Sep 6 00:23:30.954192 env[1191]: time="2025-09-06T00:23:30.954140759Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c\"" Sep 6 00:23:30.955163 env[1191]: time="2025-09-06T00:23:30.955128841Z" level=info msg="StartContainer for \"3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c\"" Sep 6 00:23:30.979884 systemd[1]: Started cri-containerd-3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c.scope. Sep 6 00:23:31.034560 env[1191]: time="2025-09-06T00:23:31.034483980Z" level=info msg="StartContainer for \"3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c\" returns successfully" Sep 6 00:23:31.042446 systemd[1]: cri-containerd-3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c.scope: Deactivated successfully. Sep 6 00:23:31.090808 env[1191]: time="2025-09-06T00:23:31.090743412Z" level=info msg="shim disconnected" id=3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c Sep 6 00:23:31.090808 env[1191]: time="2025-09-06T00:23:31.090800211Z" level=warning msg="cleaning up after shim disconnected" id=3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c namespace=k8s.io Sep 6 00:23:31.090808 env[1191]: time="2025-09-06T00:23:31.090809851Z" level=info msg="cleaning up dead shim" Sep 6 00:23:31.101634 env[1191]: time="2025-09-06T00:23:31.101564057Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n" Sep 6 00:23:31.563725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c-rootfs.mount: Deactivated successfully. 
Sep 6 00:23:31.602586 kubelet[1902]: E0906 00:23:31.602525 1902 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:23:31.930732 kubelet[1902]: E0906 00:23:31.930669 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:31.937059 env[1191]: time="2025-09-06T00:23:31.937007391Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:23:31.958412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947053326.mount: Deactivated successfully. Sep 6 00:23:31.968609 env[1191]: time="2025-09-06T00:23:31.968554843Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a\"" Sep 6 00:23:31.969753 env[1191]: time="2025-09-06T00:23:31.969720900Z" level=info msg="StartContainer for \"0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a\"" Sep 6 00:23:32.002338 systemd[1]: Started cri-containerd-0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a.scope. Sep 6 00:23:32.043877 systemd[1]: cri-containerd-0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a.scope: Deactivated successfully. 
Sep 6 00:23:32.045829 env[1191]: time="2025-09-06T00:23:32.045673366Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice/cri-containerd-0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a.scope/memory.events\": no such file or directory" Sep 6 00:23:32.048053 env[1191]: time="2025-09-06T00:23:32.047968142Z" level=info msg="StartContainer for \"0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a\" returns successfully" Sep 6 00:23:32.076680 env[1191]: time="2025-09-06T00:23:32.076624879Z" level=info msg="shim disconnected" id=0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a Sep 6 00:23:32.076680 env[1191]: time="2025-09-06T00:23:32.076672752Z" level=warning msg="cleaning up after shim disconnected" id=0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a namespace=k8s.io Sep 6 00:23:32.076680 env[1191]: time="2025-09-06T00:23:32.076682571Z" level=info msg="cleaning up dead shim" Sep 6 00:23:32.087423 env[1191]: time="2025-09-06T00:23:32.087355956Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4085 runtime=io.containerd.runc.v2\n" Sep 6 00:23:32.563684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a-rootfs.mount: Deactivated successfully. 
Sep 6 00:23:32.936578 kubelet[1902]: E0906 00:23:32.936510 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 6 00:23:32.942258 env[1191]: time="2025-09-06T00:23:32.942214251Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:23:32.968178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767265409.mount: Deactivated successfully. Sep 6 00:23:32.975878 env[1191]: time="2025-09-06T00:23:32.975733801Z" level=info msg="CreateContainer within sandbox \"2dff932ef887b4344b2315a7aaf3278e51651ce847d9df93833fb6a0567938df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4\"" Sep 6 00:23:32.977076 env[1191]: time="2025-09-06T00:23:32.977027830Z" level=info msg="StartContainer for \"04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4\"" Sep 6 00:23:33.016655 systemd[1]: Started cri-containerd-04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4.scope. 
Sep 6 00:23:33.142495 kubelet[1902]: W0906 00:23:33.142448 1902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice/cri-containerd-69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d.scope WatchSource:0}: task 69833720343ed3b2eacacc4b0c8f80922881ef5d1e3c1c8eedc211d53ff5f53d not found
Sep 6 00:23:33.162260 env[1191]: time="2025-09-06T00:23:33.162202251Z" level=info msg="StartContainer for \"04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4\" returns successfully"
Sep 6 00:23:33.756283 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:23:33.946030 kubelet[1902]: E0906 00:23:33.945731 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:33.973434 kubelet[1902]: I0906 00:23:33.973326 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tw6nx" podStartSLOduration=5.973301648 podStartE2EDuration="5.973301648s" podCreationTimestamp="2025-09-06 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:33.97306845 +0000 UTC m=+112.765317723" watchObservedRunningTime="2025-09-06 00:23:33.973301648 +0000 UTC m=+112.765550896"
Sep 6 00:23:34.541161 kubelet[1902]: I0906 00:23:34.541053 1902 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-f7f83b6e50" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:23:34Z","lastTransitionTime":"2025-09-06T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 00:23:35.305343 kubelet[1902]: E0906 00:23:35.305306 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:35.378698 systemd[1]: run-containerd-runc-k8s.io-04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4-runc.6P3y9t.mount: Deactivated successfully.
Sep 6 00:23:36.252737 kubelet[1902]: W0906 00:23:36.252680 1902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice/cri-containerd-d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5.scope WatchSource:0}: task d683d8d731cfaeefae4b11e629de8dc4756d6c45de56539ccb486834e43e1be5 not found
Sep 6 00:23:37.146500 systemd-networkd[1007]: lxc_health: Link UP
Sep 6 00:23:37.152140 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:23:37.152158 systemd-networkd[1007]: lxc_health: Gained carrier
Sep 6 00:23:37.310999 kubelet[1902]: E0906 00:23:37.310940 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:37.596688 systemd[1]: run-containerd-runc-k8s.io-04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4-runc.WNAb5o.mount: Deactivated successfully.
Sep 6 00:23:37.954871 kubelet[1902]: E0906 00:23:37.954713 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:38.723301 systemd-networkd[1007]: lxc_health: Gained IPv6LL
Sep 6 00:23:38.956978 kubelet[1902]: E0906 00:23:38.956936 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 6 00:23:39.364406 kubelet[1902]: W0906 00:23:39.364354 1902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice/cri-containerd-3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c.scope WatchSource:0}: task 3293ce6062b751eda49284bcdebfb3f3674ffba7238d56cb08a3785c91c6b47c not found
Sep 6 00:23:39.839971 systemd[1]: run-containerd-runc-k8s.io-04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4-runc.Y41547.mount: Deactivated successfully.
Sep 6 00:23:41.413366 env[1191]: time="2025-09-06T00:23:41.413138227Z" level=info msg="StopPodSandbox for \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\""
Sep 6 00:23:41.413366 env[1191]: time="2025-09-06T00:23:41.413239222Z" level=info msg="TearDown network for sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" successfully"
Sep 6 00:23:41.413366 env[1191]: time="2025-09-06T00:23:41.413274311Z" level=info msg="StopPodSandbox for \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" returns successfully"
Sep 6 00:23:41.416564 env[1191]: time="2025-09-06T00:23:41.415793687Z" level=info msg="RemovePodSandbox for \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\""
Sep 6 00:23:41.416564 env[1191]: time="2025-09-06T00:23:41.415837099Z" level=info msg="Forcibly stopping sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\""
Sep 6 00:23:41.416564 env[1191]: time="2025-09-06T00:23:41.415930745Z" level=info msg="TearDown network for sandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" successfully"
Sep 6 00:23:41.419292 env[1191]: time="2025-09-06T00:23:41.419223940Z" level=info msg="RemovePodSandbox \"c52c64e70f2578ccfa41bcfc67f5f261e5968db8e76c920d134c24c396d7ab95\" returns successfully"
Sep 6 00:23:41.420156 env[1191]: time="2025-09-06T00:23:41.420121827Z" level=info msg="StopPodSandbox for \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\""
Sep 6 00:23:41.420437 env[1191]: time="2025-09-06T00:23:41.420386215Z" level=info msg="TearDown network for sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" successfully"
Sep 6 00:23:41.420535 env[1191]: time="2025-09-06T00:23:41.420511894Z" level=info msg="StopPodSandbox for \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" returns successfully"
Sep 6 00:23:41.421149 env[1191]: time="2025-09-06T00:23:41.421093130Z" level=info msg="RemovePodSandbox for \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\""
Sep 6 00:23:41.421265 env[1191]: time="2025-09-06T00:23:41.421157536Z" level=info msg="Forcibly stopping sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\""
Sep 6 00:23:41.421320 env[1191]: time="2025-09-06T00:23:41.421270754Z" level=info msg="TearDown network for sandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" successfully"
Sep 6 00:23:41.424564 env[1191]: time="2025-09-06T00:23:41.424506708Z" level=info msg="RemovePodSandbox \"086c795788d634eabd334c63628b706ad5ad16c362bf92332116cc41a217024f\" returns successfully"
Sep 6 00:23:41.425368 env[1191]: time="2025-09-06T00:23:41.425329401Z" level=info msg="StopPodSandbox for \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\""
Sep 6 00:23:41.425720 env[1191]: time="2025-09-06T00:23:41.425649259Z" level=info msg="TearDown network for sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" successfully"
Sep 6 00:23:41.425872 env[1191]: time="2025-09-06T00:23:41.425836700Z" level=info msg="StopPodSandbox for \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" returns successfully"
Sep 6 00:23:41.426542 env[1191]: time="2025-09-06T00:23:41.426511203Z" level=info msg="RemovePodSandbox for \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\""
Sep 6 00:23:41.426730 env[1191]: time="2025-09-06T00:23:41.426685063Z" level=info msg="Forcibly stopping sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\""
Sep 6 00:23:41.426922 env[1191]: time="2025-09-06T00:23:41.426900061Z" level=info msg="TearDown network for sandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" successfully"
Sep 6 00:23:41.429980 env[1191]: time="2025-09-06T00:23:41.429848177Z" level=info msg="RemovePodSandbox \"0cee658c7c3882ad2836a32b26752b0391c2e059bb45508a8e440a147349cb47\" returns successfully"
Sep 6 00:23:42.036452 systemd[1]: run-containerd-runc-k8s.io-04ad64fa29001f18c46fa8299866a679c5d7b594553924a0e69e032ebf6190b4-runc.7UfuXJ.mount: Deactivated successfully.
Sep 6 00:23:42.137713 sshd[3685]: pam_unix(sshd:session): session closed for user core
Sep 6 00:23:42.141363 systemd[1]: sshd@26-143.198.64.97:22-147.75.109.163:54918.service: Deactivated successfully.
Sep 6 00:23:42.142323 systemd[1]: session-27.scope: Deactivated successfully.
Sep 6 00:23:42.143616 systemd-logind[1179]: Session 27 logged out. Waiting for processes to exit.
Sep 6 00:23:42.144556 systemd-logind[1179]: Removed session 27.
Sep 6 00:23:42.473528 kubelet[1902]: W0906 00:23:42.473471 1902 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36e8a5b_a9b8_46ab_957c_6f4accca1437.slice/cri-containerd-0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a.scope WatchSource:0}: task 0a092f23cc75b76650a79a6e1ebe742ae4d5099b2b1053a798ac6ebb9933dd6a not found