Sep 6 00:16:02.080613 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:16:02.080663 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:02.080683 kernel: BIOS-provided physical RAM map:
Sep 6 00:16:02.080692 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 00:16:02.080700 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 00:16:02.080710 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 00:16:02.080720 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 6 00:16:02.080727 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 6 00:16:02.080736 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 00:16:02.080743 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 00:16:02.080750 kernel: NX (Execute Disable) protection: active
Sep 6 00:16:02.080757 kernel: SMBIOS 2.8 present.
Sep 6 00:16:02.080763 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 6 00:16:02.080770 kernel: Hypervisor detected: KVM
Sep 6 00:16:02.080779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:16:02.080789 kernel: kvm-clock: cpu 0, msr 2b19f001, primary cpu clock
Sep 6 00:16:02.080796 kernel: kvm-clock: using sched offset of 3581701314 cycles
Sep 6 00:16:02.080804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:16:02.080816 kernel: tsc: Detected 2494.140 MHz processor
Sep 6 00:16:02.080824 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:16:02.080832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:16:02.080839 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 6 00:16:02.080847 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:16:02.080857 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:16:02.080867 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 6 00:16:02.080877 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080887 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080897 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080907 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 6 00:16:02.080917 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080928 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080938 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080951 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:16:02.080963 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 6 00:16:02.080973 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 6 00:16:02.080983 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 6 00:16:02.080995 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 6 00:16:02.081006 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 6 00:16:02.081018 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 6 00:16:02.081029 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 6 00:16:02.081048 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 6 00:16:02.081059 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 6 00:16:02.081072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 6 00:16:02.081083 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 6 00:16:02.081096 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 6 00:16:02.081109 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 6 00:16:02.081124 kernel: Zone ranges:
Sep 6 00:16:02.081135 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:16:02.081146 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 6 00:16:02.081157 kernel: Normal empty
Sep 6 00:16:02.081167 kernel: Movable zone start for each node
Sep 6 00:16:02.081179 kernel: Early memory node ranges
Sep 6 00:16:02.081190 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 00:16:02.081201 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 6 00:16:02.081212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 6 00:16:02.081227 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:16:02.081244 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 00:16:02.081256 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 6 00:16:02.081266 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:16:02.081277 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:16:02.081288 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:16:02.081298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:16:02.081309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:16:02.081321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:16:02.081335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:16:02.081402 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:16:02.081418 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:16:02.081428 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:16:02.081442 kernel: TSC deadline timer available
Sep 6 00:16:02.081452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 6 00:16:02.081464 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 6 00:16:02.081477 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:16:02.081499 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:16:02.081521 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 6 00:16:02.081532 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 6 00:16:02.081544 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 6 00:16:02.081555 kernel: pcpu-alloc: [0] 0 1
Sep 6 00:16:02.081565 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 6 00:16:02.081577 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 6 00:16:02.081588 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 6 00:16:02.081599 kernel: Policy zone: DMA32
Sep 6 00:16:02.081612 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:02.081630 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:16:02.081641 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:16:02.081653 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 6 00:16:02.081664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:16:02.081676 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 6 00:16:02.081688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 00:16:02.081699 kernel: Kernel/User page tables isolation: enabled
Sep 6 00:16:02.081712 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:16:02.081729 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:16:02.081741 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:16:02.081753 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:16:02.081764 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 00:16:02.081775 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:16:02.081785 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:16:02.081796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:16:02.081809 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 00:16:02.081823 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 6 00:16:02.081840 kernel: random: crng init done
Sep 6 00:16:02.081852 kernel: Console: colour VGA+ 80x25
Sep 6 00:16:02.081864 kernel: printk: console [tty0] enabled
Sep 6 00:16:02.081877 kernel: printk: console [ttyS0] enabled
Sep 6 00:16:02.081890 kernel: ACPI: Core revision 20210730
Sep 6 00:16:02.081901 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:16:02.081913 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:16:02.081925 kernel: x2apic enabled
Sep 6 00:16:02.081937 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:16:02.081949 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:16:02.081970 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 6 00:16:02.081982 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 6 00:16:02.082003 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 6 00:16:02.082017 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 6 00:16:02.082031 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:16:02.082044 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:16:02.082055 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:16:02.082067 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 6 00:16:02.082084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:16:02.082107 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:16:02.082122 kernel: MDS: Mitigation: Clear CPU buffers
Sep 6 00:16:02.082140 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 6 00:16:02.082153 kernel: active return thunk: its_return_thunk
Sep 6 00:16:02.082165 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:16:02.082177 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:16:02.082188 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:16:02.082200 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:16:02.082214 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:16:02.082232 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:16:02.082244 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:16:02.082259 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:16:02.082272 kernel: LSM: Security Framework initializing
Sep 6 00:16:02.082284 kernel: SELinux: Initializing.
Sep 6 00:16:02.082297 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:16:02.082309 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 6 00:16:02.082325 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 6 00:16:02.082337 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 6 00:16:02.082349 kernel: signal: max sigframe size: 1776
Sep 6 00:16:02.082361 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:16:02.082374 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 6 00:16:02.084461 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:16:02.084489 kernel: x86: Booting SMP configuration:
Sep 6 00:16:02.084503 kernel: .... node #0, CPUs: #1
Sep 6 00:16:02.084515 kernel: kvm-clock: cpu 1, msr 2b19f041, secondary cpu clock
Sep 6 00:16:02.084535 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 6 00:16:02.084547 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 00:16:02.084560 kernel: smpboot: Max logical packages: 1
Sep 6 00:16:02.084572 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 6 00:16:02.084585 kernel: devtmpfs: initialized
Sep 6 00:16:02.084598 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:16:02.084611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:16:02.084625 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 00:16:02.084636 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:16:02.084654 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:16:02.084666 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:16:02.084678 kernel: audit: type=2000 audit(1757117761.565:1): state=initialized audit_enabled=0 res=1
Sep 6 00:16:02.084691 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:16:02.084702 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:16:02.084714 kernel: cpuidle: using governor menu
Sep 6 00:16:02.084735 kernel: ACPI: bus type PCI registered
Sep 6 00:16:02.084747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:16:02.084758 kernel: dca service started, version 1.12.1
Sep 6 00:16:02.084775 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:16:02.084787 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:16:02.084799 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:16:02.084811 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:16:02.084823 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:16:02.084835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:16:02.084846 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:16:02.084860 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:16:02.084873 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:16:02.084889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:16:02.084902 kernel: ACPI: Interpreter enabled
Sep 6 00:16:02.084914 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:16:02.084927 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:16:02.084941 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:16:02.084954 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 6 00:16:02.084968 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:16:02.085292 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:16:02.085626 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 6 00:16:02.085652 kernel: acpiphp: Slot [3] registered
Sep 6 00:16:02.085664 kernel: acpiphp: Slot [4] registered
Sep 6 00:16:02.085677 kernel: acpiphp: Slot [5] registered
Sep 6 00:16:02.085688 kernel: acpiphp: Slot [6] registered
Sep 6 00:16:02.085700 kernel: acpiphp: Slot [7] registered
Sep 6 00:16:02.085711 kernel: acpiphp: Slot [8] registered
Sep 6 00:16:02.085723 kernel: acpiphp: Slot [9] registered
Sep 6 00:16:02.085735 kernel: acpiphp: Slot [10] registered
Sep 6 00:16:02.085755 kernel: acpiphp: Slot [11] registered
Sep 6 00:16:02.085766 kernel: acpiphp: Slot [12] registered
Sep 6 00:16:02.085778 kernel: acpiphp: Slot [13] registered
Sep 6 00:16:02.085791 kernel: acpiphp: Slot [14] registered
Sep 6 00:16:02.085803 kernel: acpiphp: Slot [15] registered
Sep 6 00:16:02.085815 kernel: acpiphp: Slot [16] registered
Sep 6 00:16:02.085827 kernel: acpiphp: Slot [17] registered
Sep 6 00:16:02.085839 kernel: acpiphp: Slot [18] registered
Sep 6 00:16:02.085850 kernel: acpiphp: Slot [19] registered
Sep 6 00:16:02.085867 kernel: acpiphp: Slot [20] registered
Sep 6 00:16:02.085879 kernel: acpiphp: Slot [21] registered
Sep 6 00:16:02.085890 kernel: acpiphp: Slot [22] registered
Sep 6 00:16:02.085903 kernel: acpiphp: Slot [23] registered
Sep 6 00:16:02.085917 kernel: acpiphp: Slot [24] registered
Sep 6 00:16:02.085929 kernel: acpiphp: Slot [25] registered
Sep 6 00:16:02.085941 kernel: acpiphp: Slot [26] registered
Sep 6 00:16:02.085953 kernel: acpiphp: Slot [27] registered
Sep 6 00:16:02.085965 kernel: acpiphp: Slot [28] registered
Sep 6 00:16:02.085979 kernel: acpiphp: Slot [29] registered
Sep 6 00:16:02.085996 kernel: acpiphp: Slot [30] registered
Sep 6 00:16:02.086008 kernel: acpiphp: Slot [31] registered
Sep 6 00:16:02.086019 kernel: PCI host bridge to bus 0000:00
Sep 6 00:16:02.086209 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:16:02.087493 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:16:02.087689 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:16:02.087834 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 6 00:16:02.087979 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 6 00:16:02.088118 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:16:02.088319 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 6 00:16:02.090627 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 6 00:16:02.090833 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 6 00:16:02.090983 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 6 00:16:02.091138 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 6 00:16:02.091291 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 6 00:16:02.091496 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 6 00:16:02.091622 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 6 00:16:02.091780 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 6 00:16:02.091915 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 6 00:16:02.092022 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 6 00:16:02.092124 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 6 00:16:02.092225 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 6 00:16:02.092376 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 6 00:16:02.092497 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 6 00:16:02.092597 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 6 00:16:02.092691 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 6 00:16:02.092806 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 6 00:16:02.092946 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:16:02.093068 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:16:02.093161 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 6 00:16:02.093263 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 6 00:16:02.093435 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 6 00:16:02.093602 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:16:02.093707 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 6 00:16:02.093799 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 6 00:16:02.093888 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 6 00:16:02.094022 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 6 00:16:02.094157 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 6 00:16:02.094259 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 6 00:16:02.094407 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 6 00:16:02.094546 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:16:02.094693 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 00:16:02.094827 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 6 00:16:02.094965 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 6 00:16:02.095099 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:16:02.095192 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 6 00:16:02.095280 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 6 00:16:02.099513 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 6 00:16:02.099702 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 6 00:16:02.099811 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 6 00:16:02.099906 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 6 00:16:02.099917 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:16:02.099927 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:16:02.099935 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:16:02.099948 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:16:02.099957 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 6 00:16:02.099965 kernel: iommu: Default domain type: Translated
Sep 6 00:16:02.099974 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:16:02.100067 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 6 00:16:02.100160 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:16:02.100252 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 6 00:16:02.100263 kernel: vgaarb: loaded
Sep 6 00:16:02.100272 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:16:02.100284 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:16:02.100292 kernel: PTP clock support registered
Sep 6 00:16:02.100301 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:16:02.100309 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:16:02.100318 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 00:16:02.100327 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 6 00:16:02.100335 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:16:02.100344 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:16:02.100353 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:16:02.100364 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:16:02.100373 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:16:02.102478 kernel: pnp: PnP ACPI init
Sep 6 00:16:02.102500 kernel: pnp: PnP ACPI: found 4 devices
Sep 6 00:16:02.102510 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:16:02.102519 kernel: NET: Registered PF_INET protocol family
Sep 6 00:16:02.102528 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:16:02.102537 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 6 00:16:02.102551 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:16:02.102560 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 6 00:16:02.102568 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 6 00:16:02.102577 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 6 00:16:02.102586 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:16:02.102595 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 6 00:16:02.102604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:16:02.102613 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:16:02.102770 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:16:02.102860 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:16:02.102942 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:16:02.103048 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 6 00:16:02.103131 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 6 00:16:02.103233 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 6 00:16:02.103330 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 6 00:16:02.103472 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 6 00:16:02.103485 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 6 00:16:02.103582 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 37252 usecs
Sep 6 00:16:02.103593 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:16:02.103603 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 6 00:16:02.103612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 6 00:16:02.103620 kernel: Initialise system trusted keyrings
Sep 6 00:16:02.103629 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 6 00:16:02.103638 kernel: Key type asymmetric registered
Sep 6 00:16:02.103646 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:16:02.103655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:16:02.103667 kernel: io scheduler mq-deadline registered
Sep 6 00:16:02.103676 kernel: io scheduler kyber registered
Sep 6 00:16:02.103684 kernel: io scheduler bfq registered
Sep 6 00:16:02.103693 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:16:02.103702 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 6 00:16:02.103711 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 6 00:16:02.103719 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 6 00:16:02.103728 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:16:02.103736 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:16:02.103748 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:16:02.103756 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:16:02.103765 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:16:02.103882 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 6 00:16:02.103896 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:16:02.103979 kernel: rtc_cmos 00:03: registered as rtc0
Sep 6 00:16:02.104076 kernel: rtc_cmos 00:03: setting system clock to 2025-09-06T00:16:01 UTC (1757117761)
Sep 6 00:16:02.104162 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 6 00:16:02.104177 kernel: intel_pstate: CPU model not supported
Sep 6 00:16:02.104186 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:16:02.104194 kernel: Segment Routing with IPv6
Sep 6 00:16:02.104203 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:16:02.104211 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:16:02.104220 kernel: Key type dns_resolver registered
Sep 6 00:16:02.104228 kernel: IPI shorthand broadcast: enabled
Sep 6 00:16:02.104237 kernel: sched_clock: Marking stable (670524169, 83830619)->(891192196, -136837408)
Sep 6 00:16:02.104245 kernel: registered taskstats version 1
Sep 6 00:16:02.104257 kernel: Loading compiled-in X.509 certificates
Sep 6 00:16:02.104266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:16:02.104274 kernel: Key type .fscrypt registered
Sep 6 00:16:02.104283 kernel: Key type fscrypt-provisioning registered
Sep 6 00:16:02.104292 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:16:02.104300 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:16:02.104309 kernel: ima: No architecture policies found
Sep 6 00:16:02.104317 kernel: clk: Disabling unused clocks
Sep 6 00:16:02.104329 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:16:02.104337 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:16:02.104346 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:16:02.104355 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:16:02.104364 kernel: Run /init as init process
Sep 6 00:16:02.104373 kernel: with arguments:
Sep 6 00:16:02.104415 kernel: /init
Sep 6 00:16:02.104427 kernel: with environment:
Sep 6 00:16:02.104436 kernel: HOME=/
Sep 6 00:16:02.104447 kernel: TERM=linux
Sep 6 00:16:02.104455 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:16:02.104469 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:16:02.104481 systemd[1]: Detected virtualization kvm.
Sep 6 00:16:02.104490 systemd[1]: Detected architecture x86-64.
Sep 6 00:16:02.104499 systemd[1]: Running in initrd.
Sep 6 00:16:02.104508 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:16:02.104518 systemd[1]: Hostname set to .
Sep 6 00:16:02.104530 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:16:02.104540 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:16:02.104549 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:16:02.104558 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:16:02.104567 systemd[1]: Reached target paths.target.
Sep 6 00:16:02.104576 systemd[1]: Reached target slices.target.
Sep 6 00:16:02.104586 systemd[1]: Reached target swap.target.
Sep 6 00:16:02.104595 systemd[1]: Reached target timers.target.
Sep 6 00:16:02.104608 systemd[1]: Listening on iscsid.socket.
Sep 6 00:16:02.104618 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:16:02.104627 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:16:02.104636 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:16:02.104645 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:16:02.104655 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:16:02.104664 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:16:02.104673 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:16:02.104687 systemd[1]: Reached target sockets.target.
Sep 6 00:16:02.104697 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:16:02.104709 systemd[1]: Finished network-cleanup.service.
Sep 6 00:16:02.104718 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:16:02.104727 systemd[1]: Starting systemd-journald.service...
Sep 6 00:16:02.104739 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:16:02.104749 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:16:02.104758 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:16:02.104767 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:16:02.104777 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:16:02.104786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:16:02.104804 systemd-journald[184]: Journal started
Sep 6 00:16:02.104866 systemd-journald[184]: Runtime Journal (/run/log/journal/121ba7c8a6a445909fe766b7cd376bf0) is 4.9M, max 39.5M, 34.5M free.
Sep 6 00:16:02.081536 systemd-modules-load[185]: Inserted module 'overlay'
Sep 6 00:16:02.129121 systemd[1]: Started systemd-journald.service.
Sep 6 00:16:02.129163 kernel: audit: type=1130 audit(1757117762.125:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.111313 systemd-resolved[186]: Positive Trust Anchors:
Sep 6 00:16:02.111326 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:16:02.111359 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:16:02.140122 kernel: audit: type=1130 audit(1757117762.132:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.140150 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:16:02.140164 kernel: audit: type=1130 audit(1757117762.133:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.114443 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 6 00:16:02.132648 systemd[1]: Started systemd-resolved.service.
Sep 6 00:16:02.133186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:16:02.139686 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:16:02.146569 kernel: Bridge firewalling registered
Sep 6 00:16:02.144022 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:16:02.146338 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:16:02.146613 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 6 00:16:02.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.151524 kernel: audit: type=1130 audit(1757117762.144:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.169421 kernel: SCSI subsystem initialized
Sep 6 00:16:02.170142 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:16:02.171669 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:16:02.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.175516 kernel: audit: type=1130 audit(1757117762.170:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.186372 dracut-cmdline[202]: dracut-dracut-053
Sep 6 00:16:02.190100 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:16:02.191911 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:16:02.191938 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:16:02.191950 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:16:02.194494 systemd-modules-load[185]: Inserted module 'dm_multipath'
Sep 6 00:16:02.195800 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:16:02.213097 kernel: audit: type=1130 audit(1757117762.195:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.197180 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:16:02.216066 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:16:02.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.220458 kernel: audit: type=1130 audit(1757117762.216:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.283428 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:16:02.303413 kernel: iscsi: registered transport (tcp)
Sep 6 00:16:02.328420 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:16:02.328488 kernel: QLogic iSCSI HBA Driver
Sep 6 00:16:02.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.380835 systemd[1]: Finished dracut-cmdline.service.
Sep 6 00:16:02.384659 kernel: audit: type=1130 audit(1757117762.380:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.382946 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 00:16:02.442457 kernel: raid6: avx2x4 gen() 15546 MB/s
Sep 6 00:16:02.459457 kernel: raid6: avx2x4 xor() 6128 MB/s
Sep 6 00:16:02.476449 kernel: raid6: avx2x2 gen() 15304 MB/s
Sep 6 00:16:02.493461 kernel: raid6: avx2x2 xor() 19002 MB/s
Sep 6 00:16:02.510443 kernel: raid6: avx2x1 gen() 12634 MB/s
Sep 6 00:16:02.527462 kernel: raid6: avx2x1 xor() 16868 MB/s
Sep 6 00:16:02.544453 kernel: raid6: sse2x4 gen() 12082 MB/s
Sep 6 00:16:02.561454 kernel: raid6: sse2x4 xor() 6426 MB/s
Sep 6 00:16:02.578456 kernel: raid6: sse2x2 gen() 12906 MB/s
Sep 6 00:16:02.595458 kernel: raid6: sse2x2 xor() 8580 MB/s
Sep 6 00:16:02.612443 kernel: raid6: sse2x1 gen() 11701 MB/s
Sep 6 00:16:02.629649 kernel: raid6: sse2x1 xor() 6005 MB/s
Sep 6 00:16:02.629753 kernel: raid6: using algorithm avx2x4 gen() 15546 MB/s
Sep 6 00:16:02.629778 kernel: raid6: .... xor() 6128 MB/s, rmw enabled
Sep 6 00:16:02.630832 kernel: raid6: using avx2x2 recovery algorithm
Sep 6 00:16:02.644418 kernel: xor: automatically using best checksumming function avx
Sep 6 00:16:02.747433 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 6 00:16:02.760784 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 00:16:02.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.763000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:16:02.763000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:16:02.766452 kernel: audit: type=1130 audit(1757117762.760:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.764300 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:16:02.778968 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Sep 6 00:16:02.796596 systemd[1]: Started systemd-udevd.service.
Sep 6 00:16:02.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.798531 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:16:02.813499 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Sep 6 00:16:02.851399 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:16:02.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.853298 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:16:02.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:02.902929 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:16:02.956421 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 6 00:16:02.989924 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:16:02.989943 kernel: GPT:9289727 != 125829119
Sep 6 00:16:02.989962 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:16:02.989973 kernel: GPT:9289727 != 125829119
Sep 6 00:16:02.989984 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:16:02.989995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:16:02.990006 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:16:02.990017 kernel: scsi host0: Virtio SCSI HBA
Sep 6 00:16:02.996538 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Sep 6 00:16:03.018668 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 6 00:16:03.018699 kernel: AES CTR mode by8 optimization enabled
Sep 6 00:16:03.042409 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (432)
Sep 6 00:16:03.048769 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:16:03.056367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:16:03.057437 kernel: libata version 3.00 loaded.
Sep 6 00:16:03.058479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:16:03.061493 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 6 00:16:03.072041 kernel: scsi host1: ata_piix
Sep 6 00:16:03.072192 kernel: scsi host2: ata_piix
Sep 6 00:16:03.072324 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 6 00:16:03.072342 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 6 00:16:03.063174 systemd[1]: Starting disk-uuid.service...
Sep 6 00:16:03.072864 disk-uuid[466]: Primary Header is updated.
Sep 6 00:16:03.072864 disk-uuid[466]: Secondary Entries is updated.
Sep 6 00:16:03.072864 disk-uuid[466]: Secondary Header is updated.
Sep 6 00:16:03.077187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:16:03.087941 kernel: ACPI: bus type USB registered
Sep 6 00:16:03.088013 kernel: usbcore: registered new interface driver usbfs
Sep 6 00:16:03.088027 kernel: usbcore: registered new interface driver hub
Sep 6 00:16:03.090534 kernel: usbcore: registered new device driver usb
Sep 6 00:16:03.098918 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:16:03.248404 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Sep 6 00:16:03.253457 kernel: ehci-pci: EHCI PCI platform driver
Sep 6 00:16:03.259408 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Sep 6 00:16:03.280262 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 6 00:16:03.283579 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 6 00:16:03.283718 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 6 00:16:03.283858 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Sep 6 00:16:03.284009 kernel: hub 1-0:1.0: USB hub found
Sep 6 00:16:03.284145 kernel: hub 1-0:1.0: 2 ports detected
Sep 6 00:16:04.086221 disk-uuid[467]: The operation has completed successfully.
Sep 6 00:16:04.087125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:16:04.143340 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:16:04.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.143555 systemd[1]: Finished disk-uuid.service.
Sep 6 00:16:04.145281 systemd[1]: Starting verity-setup.service...
Sep 6 00:16:04.168424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 6 00:16:04.223907 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:16:04.226130 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:16:04.228695 systemd[1]: Finished verity-setup.service.
Sep 6 00:16:04.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.321432 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:16:04.321928 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:16:04.322446 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:16:04.323522 systemd[1]: Starting ignition-setup.service...
Sep 6 00:16:04.324901 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:16:04.347083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:16:04.347162 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:16:04.347178 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:16:04.368228 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:16:04.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.376107 systemd[1]: Finished ignition-setup.service.
Sep 6 00:16:04.378028 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:16:04.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.477000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:16:04.476569 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:16:04.479099 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:16:04.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.527249 systemd-networkd[689]: lo: Link UP
Sep 6 00:16:04.527261 systemd-networkd[689]: lo: Gained carrier
Sep 6 00:16:04.527874 systemd-networkd[689]: Enumeration completed
Sep 6 00:16:04.528215 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:16:04.532074 ignition[618]: Ignition 2.14.0
Sep 6 00:16:04.528321 systemd[1]: Started systemd-networkd.service.
Sep 6 00:16:04.532093 ignition[618]: Stage: fetch-offline
Sep 6 00:16:04.529248 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 6 00:16:04.532212 ignition[618]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:04.529328 systemd[1]: Reached target network.target.
Sep 6 00:16:04.532257 ignition[618]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:04.530484 systemd-networkd[689]: eth1: Link UP
Sep 6 00:16:04.530489 systemd-networkd[689]: eth1: Gained carrier
Sep 6 00:16:04.534192 systemd[1]: Starting iscsiuio.service...
Sep 6 00:16:04.536844 systemd-networkd[689]: eth0: Link UP
Sep 6 00:16:04.536848 systemd-networkd[689]: eth0: Gained carrier
Sep 6 00:16:04.543161 ignition[618]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:04.543298 ignition[618]: parsed url from cmdline: ""
Sep 6 00:16:04.543302 ignition[618]: no config URL provided
Sep 6 00:16:04.543308 ignition[618]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:16:04.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.545230 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:16:04.543316 ignition[618]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:16:04.546795 systemd[1]: Starting ignition-fetch.service...
Sep 6 00:16:04.543323 ignition[618]: failed to fetch config: resource requires networking
Sep 6 00:16:04.543618 ignition[618]: Ignition finished successfully
Sep 6 00:16:04.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.563105 systemd[1]: Started iscsiuio.service.
Sep 6 00:16:04.564692 systemd[1]: Starting iscsid.service...
Sep 6 00:16:04.569635 ignition[693]: Ignition 2.14.0
Sep 6 00:16:04.572470 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:16:04.572470 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 6 00:16:04.572470 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:16:04.572470 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:16:04.572470 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:16:04.572470 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:16:04.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.570526 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253
Sep 6 00:16:04.569647 ignition[693]: Stage: fetch
Sep 6 00:16:04.574957 systemd[1]: Started iscsid.service.
Sep 6 00:16:04.569798 ignition[693]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:04.577186 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:16:04.569818 ignition[693]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:04.577327 systemd-networkd[689]: eth0: DHCPv4 address 159.223.206.243/20, gateway 159.223.192.1 acquired from 169.254.169.253
Sep 6 00:16:04.581264 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:04.581466 ignition[693]: parsed url from cmdline: ""
Sep 6 00:16:04.581471 ignition[693]: no config URL provided
Sep 6 00:16:04.581478 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:16:04.581488 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:16:04.581523 ignition[693]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 6 00:16:04.599080 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:16:04.599620 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:16:04.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.599926 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:16:04.600228 systemd[1]: Reached target remote-fs.target.
Sep 6 00:16:04.601663 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:16:04.609908 ignition[693]: GET result: OK
Sep 6 00:16:04.610199 ignition[693]: parsing config with SHA512: 02e694cad9bc3af2d675be819c86a45a07575d9dc08ad263cc3a832e3abce85601417fba0ddc5c49231431c0068fd34eb05cf3dc72b8fdff83744449e46a56d0
Sep 6 00:16:04.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.620646 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:16:04.626264 unknown[693]: fetched base config from "system"
Sep 6 00:16:04.626284 unknown[693]: fetched base config from "system"
Sep 6 00:16:04.627171 ignition[693]: fetch: fetch complete
Sep 6 00:16:04.626294 unknown[693]: fetched user config from "digitalocean"
Sep 6 00:16:04.627183 ignition[693]: fetch: fetch passed
Sep 6 00:16:04.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.629054 systemd[1]: Finished ignition-fetch.service.
Sep 6 00:16:04.627279 ignition[693]: Ignition finished successfully
Sep 6 00:16:04.630505 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:16:04.644979 ignition[714]: Ignition 2.14.0
Sep 6 00:16:04.644994 ignition[714]: Stage: kargs
Sep 6 00:16:04.645179 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:04.645208 ignition[714]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:04.647583 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:04.649864 ignition[714]: kargs: kargs passed
Sep 6 00:16:04.649931 ignition[714]: Ignition finished successfully
Sep 6 00:16:04.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.651432 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:16:04.652835 systemd[1]: Starting ignition-disks.service...
Sep 6 00:16:04.664987 ignition[720]: Ignition 2.14.0
Sep 6 00:16:04.664998 ignition[720]: Stage: disks
Sep 6 00:16:04.665132 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:04.665152 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:04.667119 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:04.668456 ignition[720]: disks: disks passed
Sep 6 00:16:04.668519 ignition[720]: Ignition finished successfully
Sep 6 00:16:04.669702 systemd[1]: Finished ignition-disks.service.
Sep 6 00:16:04.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.670517 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:16:04.671263 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:16:04.671940 systemd[1]: Reached target local-fs.target.
Sep 6 00:16:04.672720 systemd[1]: Reached target sysinit.target.
Sep 6 00:16:04.673457 systemd[1]: Reached target basic.target.
Sep 6 00:16:04.675732 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:16:04.700463 systemd-fsck[728]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 6 00:16:04.704982 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:16:04.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.707061 systemd[1]: Mounting sysroot.mount...
Sep 6 00:16:04.719408 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:16:04.720436 systemd[1]: Mounted sysroot.mount.
Sep 6 00:16:04.721015 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:16:04.723344 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:16:04.725132 systemd[1]: Starting flatcar-digitalocean-network.service...
Sep 6 00:16:04.732096 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 00:16:04.732741 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:16:04.732797 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:16:04.735130 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:16:04.737589 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:16:04.754445 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:16:04.761374 initrd-setup-root[748]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:16:04.772537 initrd-setup-root[756]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:16:04.783371 initrd-setup-root[766]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:16:04.876823 coreos-metadata[735]: Sep 06 00:16:04.876 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:04.882118 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:16:04.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.884018 systemd[1]: Starting ignition-mount.service...
Sep 6 00:16:04.885242 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:16:04.896550 coreos-metadata[735]: Sep 06 00:16:04.896 INFO Fetch successful
Sep 6 00:16:04.902807 coreos-metadata[735]: Sep 06 00:16:04.902 INFO wrote hostname ci-3510.3.8-n-f21ba72e96 to /sysroot/etc/hostname
Sep 6 00:16:04.903821 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 00:16:04.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.906016 coreos-metadata[734]: Sep 06 00:16:04.905 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:04.906807 bash[785]: umount: /sysroot/usr/share/oem: not mounted.
Sep 6 00:16:04.920146 coreos-metadata[734]: Sep 06 00:16:04.920 INFO Fetch successful
Sep 6 00:16:04.927454 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 6 00:16:04.927553 systemd[1]: Finished flatcar-digitalocean-network.service.
Sep 6 00:16:04.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.931522 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:16:04.933200 ignition[787]: INFO : Ignition 2.14.0
Sep 6 00:16:04.933200 ignition[787]: INFO : Stage: mount
Sep 6 00:16:04.934503 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:04.934503 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:04.936101 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:04.938657 ignition[787]: INFO : mount: mount passed
Sep 6 00:16:04.938657 ignition[787]: INFO : Ignition finished successfully
Sep 6 00:16:04.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:04.938586 systemd[1]: Finished ignition-mount.service.
Sep 6 00:16:05.245527 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:16:05.257480 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Sep 6 00:16:05.259979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 6 00:16:05.260083 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:16:05.260104 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:16:05.265908 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:16:05.267725 systemd[1]: Starting ignition-files.service...
Sep 6 00:16:05.291983 ignition[816]: INFO : Ignition 2.14.0
Sep 6 00:16:05.291983 ignition[816]: INFO : Stage: files
Sep 6 00:16:05.302472 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 00:16:05.302472 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Sep 6 00:16:05.302472 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 6 00:16:05.302472 ignition[816]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:16:05.304875 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:16:05.304875 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:16:05.307308 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:16:05.308140 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:16:05.309263 unknown[816]: wrote ssh authorized keys file for user: core
Sep 6 00:16:05.310167 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:16:05.310679 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:16:05.310679 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 6 00:16:05.453610 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:16:05.612808 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 6 00:16:05.613821 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:16:05.613821 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 6 00:16:05.696412 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:16:05.713808 systemd-networkd[689]: eth1: Gained IPv6LL
Sep 6 00:16:05.838181 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:16:05.839086 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:16:05.840073 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:16:05.840877 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:16:05.841749 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:16:05.842513 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:16:05.842513 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:16:05.842513 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:16:05.842513 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:16:05.842513 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:16:05.845612 ignition[816]: INFO : files:
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:16:05.845612 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:16:05.845612 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:16:05.845612 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:16:05.845612 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:16:06.097808 systemd-networkd[689]: eth0: Gained IPv6LL Sep 6 00:16:06.295750 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:16:08.546181 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:16:08.546181 ignition[816]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:16:08.546181 ignition[816]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:16:08.546181 ignition[816]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:16:08.549000 ignition[816]: INFO : 
files: op(d): [finished] processing unit "prepare-helm.service" Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:16:08.549000 ignition[816]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:16:08.556530 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:16:08.558481 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:16:08.558481 ignition[816]: INFO : files: files passed Sep 6 00:16:08.558481 ignition[816]: INFO : Ignition finished successfully Sep 6 00:16:08.569134 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 6 00:16:08.569180 kernel: audit: type=1130 audit(1757117768.559:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.559136 systemd[1]: Finished ignition-files.service. Sep 6 00:16:08.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.562759 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Sep 6 00:16:08.576842 kernel: audit: type=1130 audit(1757117768.571:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.576876 kernel: audit: type=1131 audit(1757117768.571:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.564900 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:16:08.566493 systemd[1]: Starting ignition-quench.service... Sep 6 00:16:08.570943 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:16:08.571073 systemd[1]: Finished ignition-quench.service. Sep 6 00:16:08.579202 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:16:08.580144 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:16:08.581197 systemd[1]: Reached target ignition-complete.target. Sep 6 00:16:08.585335 kernel: audit: type=1130 audit(1757117768.580:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.586279 systemd[1]: Starting initrd-parse-etc.service... 
Sep 6 00:16:08.608942 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:16:08.609112 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:16:08.615210 kernel: audit: type=1130 audit(1757117768.609:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.615243 kernel: audit: type=1131 audit(1757117768.609:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.613690 systemd[1]: Reached target initrd-fs.target. Sep 6 00:16:08.615801 systemd[1]: Reached target initrd.target. Sep 6 00:16:08.616369 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:16:08.618217 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:16:08.635066 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:16:08.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.640433 kernel: audit: type=1130 audit(1757117768.635:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.639748 systemd[1]: Starting initrd-cleanup.service... 
Sep 6 00:16:08.651931 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:16:08.652918 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:16:08.653862 systemd[1]: Stopped target timers.target. Sep 6 00:16:08.654833 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:16:08.655515 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:16:08.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.658752 systemd[1]: Stopped target initrd.target. Sep 6 00:16:08.665554 kernel: audit: type=1131 audit(1757117768.655:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.659396 systemd[1]: Stopped target basic.target. Sep 6 00:16:08.665951 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:16:08.666485 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:16:08.667060 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:16:08.667684 systemd[1]: Stopped target remote-fs.target. Sep 6 00:16:08.668285 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:16:08.668872 systemd[1]: Stopped target sysinit.target. Sep 6 00:16:08.669565 systemd[1]: Stopped target local-fs.target. Sep 6 00:16:08.670179 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:16:08.670764 systemd[1]: Stopped target swap.target. Sep 6 00:16:08.671323 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:16:08.674471 kernel: audit: type=1131 audit(1757117768.671:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:08.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.671497 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:16:08.671991 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:16:08.674970 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:16:08.678421 kernel: audit: type=1131 audit(1757117768.675:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.675150 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:16:08.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.676209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:16:08.676365 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:16:08.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.679142 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:16:08.679336 systemd[1]: Stopped ignition-files.service. Sep 6 00:16:08.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:08.680446 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 00:16:08.680640 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 00:16:08.683148 systemd[1]: Stopping ignition-mount.service... Sep 6 00:16:08.683959 systemd[1]: Stopping iscsiuio.service... Sep 6 00:16:08.693707 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:16:08.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.694044 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:16:08.696246 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:16:08.696770 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:16:08.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.697040 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:16:08.697911 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:16:08.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.698104 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:16:08.700546 ignition[854]: INFO : Ignition 2.14.0 Sep 6 00:16:08.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:08.703347 ignition[854]: INFO : Stage: umount Sep 6 00:16:08.703347 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:16:08.703347 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 6 00:16:08.702346 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:16:08.702560 systemd[1]: Stopped iscsiuio.service. Sep 6 00:16:08.704868 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:16:08.706479 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 6 00:16:08.707878 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:16:08.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.714327 ignition[854]: INFO : umount: umount passed Sep 6 00:16:08.715037 ignition[854]: INFO : Ignition finished successfully Sep 6 00:16:08.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:08.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.716524 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:16:08.716623 systemd[1]: Stopped ignition-mount.service. Sep 6 00:16:08.717087 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:16:08.717150 systemd[1]: Stopped ignition-disks.service. Sep 6 00:16:08.717617 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:16:08.717662 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:16:08.718260 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:16:08.718322 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:16:08.718766 systemd[1]: Stopped target network.target. Sep 6 00:16:08.723483 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:16:08.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.723583 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:16:08.725984 systemd[1]: Stopped target paths.target. Sep 6 00:16:08.726287 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:16:08.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.728462 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:16:08.728849 systemd[1]: Stopped target slices.target. Sep 6 00:16:08.729194 systemd[1]: Stopped target sockets.target. Sep 6 00:16:08.729625 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:16:08.729670 systemd[1]: Closed iscsid.socket. 
Sep 6 00:16:08.730050 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:16:08.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.730107 systemd[1]: Closed iscsiuio.socket. Sep 6 00:16:08.730479 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:16:08.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.730537 systemd[1]: Stopped ignition-setup.service. Sep 6 00:16:08.741000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:16:08.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.731336 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:16:08.731876 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:16:08.734518 systemd-networkd[689]: eth1: DHCPv6 lease lost Sep 6 00:16:08.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.734972 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:16:08.736062 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:16:08.736249 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:16:08.738915 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:16:08.739056 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:16:08.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:08.739494 systemd-networkd[689]: eth0: DHCPv6 lease lost Sep 6 00:16:08.749000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:16:08.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.740712 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:16:08.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.740860 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:16:08.742259 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:16:08.742309 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:16:08.743100 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:16:08.743166 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:16:08.744990 systemd[1]: Stopping network-cleanup.service... Sep 6 00:16:08.747918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:16:08.748026 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:16:08.748930 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:16:08.748990 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:16:08.749970 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:16:08.750035 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:16:08.753674 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:16:08.757705 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:16:08.761692 systemd[1]: network-cleanup.service: Deactivated successfully. 
Sep 6 00:16:08.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.761852 systemd[1]: Stopped network-cleanup.service. Sep 6 00:16:08.766494 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:16:08.766704 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:16:08.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.767813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:16:08.767860 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:16:08.768792 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:16:08.768842 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:16:08.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.769594 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:16:08.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.769656 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:16:08.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.770315 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:16:08.770368 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:16:08.771030 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 6 00:16:08.771070 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:16:08.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.772727 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:16:08.773444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:16:08.773538 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:16:08.788720 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:16:08.789695 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:16:08.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:08.791034 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:16:08.792926 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:16:08.807209 systemd[1]: Switching root. Sep 6 00:16:08.830406 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 6 00:16:08.830516 iscsid[699]: iscsid shutting down. Sep 6 00:16:08.831353 systemd-journald[184]: Journal stopped Sep 6 00:16:12.273284 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:16:12.279819 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:16:12.279855 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:16:12.279873 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:16:12.279889 kernel: SELinux: policy capability open_perms=1 Sep 6 00:16:12.279901 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:16:12.279918 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:16:12.279931 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:16:12.279943 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:16:12.279954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:16:12.279973 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:16:12.279994 systemd[1]: Successfully loaded SELinux policy in 46.273ms. Sep 6 00:16:12.280028 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.224ms. Sep 6 00:16:12.280047 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:16:12.280067 systemd[1]: Detected virtualization kvm. Sep 6 00:16:12.280085 systemd[1]: Detected architecture x86-64. Sep 6 00:16:12.280108 systemd[1]: Detected first boot. Sep 6 00:16:12.280126 systemd[1]: Hostname set to . Sep 6 00:16:12.280149 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:16:12.280169 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:16:12.280187 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:16:12.280206 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:16:12.280229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:16:12.280245 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:16:12.280259 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:16:12.280275 systemd[1]: Stopped iscsid.service. Sep 6 00:16:12.280288 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:16:12.280301 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:16:12.280313 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:16:12.280325 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:16:12.280337 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:16:12.280351 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:16:12.280363 systemd[1]: Created slice system-getty.slice. Sep 6 00:16:12.280375 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:16:12.280404 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:16:12.280417 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:16:12.280435 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:16:12.280452 systemd[1]: Created slice user.slice. Sep 6 00:16:12.280465 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:16:12.280478 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:16:12.280490 systemd[1]: Set up automount boot.automount. Sep 6 00:16:12.280505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:16:12.280518 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:16:12.280531 systemd[1]: Stopped target initrd-fs.target. 
Sep 6 00:16:12.280550 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:16:12.280569 systemd[1]: Reached target integritysetup.target. Sep 6 00:16:12.280587 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:16:12.280605 systemd[1]: Reached target remote-fs.target. Sep 6 00:16:12.280622 systemd[1]: Reached target slices.target. Sep 6 00:16:12.280645 systemd[1]: Reached target swap.target. Sep 6 00:16:12.280664 systemd[1]: Reached target torcx.target. Sep 6 00:16:12.280685 systemd[1]: Reached target veritysetup.target. Sep 6 00:16:12.280700 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:16:12.280713 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:16:12.280730 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:16:12.280751 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:16:12.280789 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:16:12.280802 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:16:12.280815 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:16:12.280831 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:16:12.280845 systemd[1]: Mounting media.mount... Sep 6 00:16:12.280858 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:12.280870 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:16:12.280882 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:16:12.280895 systemd[1]: Mounting tmp.mount... Sep 6 00:16:12.280908 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:16:12.280920 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:16:12.280932 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:16:12.280947 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:16:12.280959 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:12.280971 systemd[1]: Starting modprobe@drm.service... 
Sep 6 00:16:12.280983 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:12.280995 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:16:12.281007 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:12.281020 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:16:12.281033 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:16:12.281045 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:16:12.281059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:16:12.281071 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:16:12.281084 systemd[1]: Stopped systemd-journald.service. Sep 6 00:16:12.281096 systemd[1]: Starting systemd-journald.service... Sep 6 00:16:12.281109 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:16:12.281122 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:16:12.281134 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:16:12.281146 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:16:12.281159 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:16:12.281174 systemd[1]: Stopped verity-setup.service. Sep 6 00:16:12.281186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:12.281199 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:16:12.281211 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:16:12.281224 systemd[1]: Mounted media.mount. Sep 6 00:16:12.281236 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:16:12.281249 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:16:12.281265 systemd[1]: Mounted tmp.mount. Sep 6 00:16:12.281283 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:16:12.281305 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:16:12.281323 systemd[1]: Finished modprobe@configfs.service. 
Sep 6 00:16:12.281357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:12.281375 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:16:12.281412 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:16:12.281434 systemd[1]: Finished modprobe@drm.service. Sep 6 00:16:12.281452 kernel: fuse: init (API version 7.34) Sep 6 00:16:12.281472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:12.281492 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:12.281511 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:16:12.281530 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:16:12.281546 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:16:12.281559 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:16:12.281571 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:16:12.281587 systemd[1]: Reached target network-pre.target. Sep 6 00:16:12.281607 systemd-journald[960]: Journal started Sep 6 00:16:12.281675 systemd-journald[960]: Runtime Journal (/run/log/journal/121ba7c8a6a445909fe766b7cd376bf0) is 4.9M, max 39.5M, 34.5M free. 
Sep 6 00:16:08.969000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:16:09.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:16:09.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:16:09.025000 audit: BPF prog-id=10 op=LOAD Sep 6 00:16:09.025000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:16:09.025000 audit: BPF prog-id=11 op=LOAD Sep 6 00:16:09.025000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:16:09.127000 audit[888]: AVC avc: denied { associate } for pid=888 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:16:09.127000 audit[888]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d88c a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=870 pid=888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:09.127000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:16:09.129000 audit[888]: AVC avc: denied { associate } for pid=888 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:16:09.129000 audit[888]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d965 a2=1ed a3=0 items=2 ppid=870 pid=888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:09.129000 audit: CWD cwd="/" Sep 6 00:16:09.129000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:09.129000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:12.288652 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:16:09.129000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:16:12.078000 audit: BPF prog-id=12 op=LOAD Sep 6 00:16:12.078000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:16:12.078000 audit: BPF prog-id=13 op=LOAD Sep 6 00:16:12.078000 audit: BPF prog-id=14 op=LOAD Sep 6 00:16:12.078000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:16:12.078000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:16:12.079000 audit: BPF prog-id=15 op=LOAD Sep 6 00:16:12.079000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:16:12.079000 audit: BPF prog-id=16 op=LOAD Sep 6 00:16:12.079000 audit: BPF prog-id=17 op=LOAD Sep 6 00:16:12.079000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:16:12.079000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:16:12.080000 audit: BPF prog-id=18 op=LOAD Sep 6 00:16:12.080000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:16:12.080000 audit: BPF prog-id=19 op=LOAD Sep 6 00:16:12.080000 audit: BPF 
prog-id=20 op=LOAD Sep 6 00:16:12.080000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:16:12.080000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:16:12.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.088000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:16:12.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:12.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.197000 audit: BPF prog-id=21 op=LOAD Sep 6 00:16:12.198000 audit: BPF prog-id=22 op=LOAD Sep 6 00:16:12.198000 audit: BPF prog-id=23 op=LOAD Sep 6 00:16:12.198000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:16:12.198000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:16:12.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:12.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.265000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:16:12.265000 audit[960]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff71eb3540 a2=4000 a3=7fff71eb35dc items=0 ppid=1 pid=960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:12.265000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:16:12.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:12.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:09.124838 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:12.075630 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:16:12.294095 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:16:12.294127 kernel: loop: module loaded Sep 6 00:16:12.294145 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:16:09.125332 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:16:12.075646 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Sep 6 00:16:09.125460 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:16:12.081484 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 00:16:09.125499 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:16:09.125511 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:16:09.125550 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:16:09.125564 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:16:09.125785 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:16:09.125846 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:16:09.125862 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:16:09.127571 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:16:09.127611 /usr/lib/systemd/system-generators/torcx-generator[888]: 
time="2025-09-06T00:16:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:16:09.127633 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:16:09.127649 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:16:09.127669 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:16:09.127683 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:16:11.607670 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:11.608121 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:11.608363 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:11.608716 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:16:11.608806 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:16:11.608921 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-09-06T00:16:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:16:12.303491 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:16:12.303570 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:16:12.311415 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:16:12.311518 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:16:12.313414 systemd[1]: Started systemd-journald.service. Sep 6 00:16:12.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.317656 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:16:12.317828 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:16:12.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.318376 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:16:12.318865 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:16:12.319413 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:16:12.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.320800 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:16:12.322843 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:16:12.323671 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:12.330166 systemd-journald[960]: Time spent on flushing to /var/log/journal/121ba7c8a6a445909fe766b7cd376bf0 is 42.717ms for 1154 entries. Sep 6 00:16:12.330166 systemd-journald[960]: System Journal (/var/log/journal/121ba7c8a6a445909fe766b7cd376bf0) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:16:12.383628 systemd-journald[960]: Received client request to flush runtime journal. Sep 6 00:16:12.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:12.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.345852 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:16:12.364156 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:16:12.366171 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:16:12.385305 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:16:12.396603 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:16:12.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.402828 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:16:12.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.404574 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:16:12.415135 udevadm[996]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:16:12.934126 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:16:12.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:12.934000 audit: BPF prog-id=24 op=LOAD Sep 6 00:16:12.934000 audit: BPF prog-id=25 op=LOAD Sep 6 00:16:12.934000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:16:12.934000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:16:12.936068 systemd[1]: Starting systemd-udevd.service... Sep 6 00:16:12.959139 systemd-udevd[997]: Using default interface naming scheme 'v252'. Sep 6 00:16:12.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:12.988000 audit: BPF prog-id=26 op=LOAD Sep 6 00:16:12.987410 systemd[1]: Started systemd-udevd.service. Sep 6 00:16:12.989957 systemd[1]: Starting systemd-networkd.service... Sep 6 00:16:13.001000 audit: BPF prog-id=27 op=LOAD Sep 6 00:16:13.001000 audit: BPF prog-id=28 op=LOAD Sep 6 00:16:13.001000 audit: BPF prog-id=29 op=LOAD Sep 6 00:16:13.003190 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:16:13.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.054369 systemd[1]: Started systemd-userdbd.service. Sep 6 00:16:13.080092 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:13.080306 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:16:13.081897 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:13.083741 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:13.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:13.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.086280 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:13.086740 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:16:13.086838 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:16:13.086952 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:13.087590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:13.089564 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:16:13.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.097301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:13.097511 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:13.098006 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:16:13.107937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:16:13.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:13.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.110999 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:16:13.111159 systemd[1]: Finished modprobe@loop.service. Sep 6 00:16:13.111736 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:13.137660 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:16:13.169307 systemd-networkd[1003]: lo: Link UP Sep 6 00:16:13.169832 systemd-networkd[1003]: lo: Gained carrier Sep 6 00:16:13.170495 systemd-networkd[1003]: Enumeration completed Sep 6 00:16:13.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.172277 systemd[1]: Started systemd-networkd.service. Sep 6 00:16:13.173000 systemd-networkd[1003]: eth1: Configuring with /run/systemd/network/10-ea:b2:ea:74:8d:a6.network. Sep 6 00:16:13.174366 systemd-networkd[1003]: eth0: Configuring with /run/systemd/network/10-ea:ba:a6:3a:e5:4a.network. 
Sep 6 00:16:13.175150 systemd-networkd[1003]: eth1: Link UP Sep 6 00:16:13.175258 systemd-networkd[1003]: eth1: Gained carrier Sep 6 00:16:13.180858 systemd-networkd[1003]: eth0: Link UP Sep 6 00:16:13.180869 systemd-networkd[1003]: eth0: Gained carrier Sep 6 00:16:13.195448 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:16:13.203451 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:16:13.205000 audit[1008]: AVC avc: denied { confidentiality } for pid=1008 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:16:13.205000 audit[1008]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b0fe98a010 a1=338ec a2=7fddeb049bc5 a3=5 items=110 ppid=997 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:16:13.205000 audit: CWD cwd="/" Sep 6 00:16:13.205000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=1 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=2 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=3 name=(null) inode=14255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=4 name=(null) inode=14254 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=5 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=6 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=7 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=8 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=9 name=(null) inode=14258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=10 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=11 name=(null) inode=14259 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=12 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=13 name=(null) inode=14260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=14 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=15 name=(null) inode=14261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=16 name=(null) inode=14257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=17 name=(null) inode=14262 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=18 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=19 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=20 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=21 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=22 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=23 name=(null) inode=14265 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=24 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=25 name=(null) inode=14266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=26 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=27 name=(null) inode=14267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=28 name=(null) inode=14263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=29 name=(null) inode=14268 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=30 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=31 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:16:13.205000 audit: PATH item=32 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=33 name=(null) inode=14270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=34 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=35 name=(null) inode=14271 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=36 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=37 name=(null) inode=14272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=38 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=39 name=(null) inode=14273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=40 name=(null) inode=14269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=41 
name=(null) inode=14274 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=42 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=43 name=(null) inode=14275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=44 name=(null) inode=14275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=45 name=(null) inode=14276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=46 name=(null) inode=14275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=47 name=(null) inode=14277 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=48 name=(null) inode=14275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=49 name=(null) inode=14278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=50 name=(null) inode=14275 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=51 name=(null) inode=14279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=52 name=(null) inode=14275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=53 name=(null) inode=14280 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=55 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=56 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=57 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=58 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=59 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=60 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=61 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=62 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=63 name=(null) inode=14285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=64 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=65 name=(null) inode=14286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=66 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=67 name=(null) inode=14287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=68 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=69 name=(null) inode=14288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=70 name=(null) inode=14284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=71 name=(null) inode=14289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=72 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=73 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=74 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=75 name=(null) inode=14291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=76 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=77 name=(null) inode=14292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Sep 6 00:16:13.205000 audit: PATH item=78 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=79 name=(null) inode=14293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=80 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=81 name=(null) inode=14294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=82 name=(null) inode=14290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=83 name=(null) inode=14295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=84 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=85 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=86 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=87 
name=(null) inode=14297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=88 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=89 name=(null) inode=14298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=90 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=91 name=(null) inode=14299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=92 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=93 name=(null) inode=14300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=94 name=(null) inode=14296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=95 name=(null) inode=14301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=96 name=(null) inode=14281 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=97 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=98 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=99 name=(null) inode=14303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=100 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=101 name=(null) inode=14304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=102 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=103 name=(null) inode=14305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=104 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=105 name=(null) inode=14306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=106 name=(null) inode=14302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=107 name=(null) inode=14307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PATH item=109 name=(null) inode=14309 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:16:13.205000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:16:13.255412 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:16:13.275468 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 6 00:16:13.279419 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:16:13.400494 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:16:13.426880 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:16:13.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.428878 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:16:13.455788 lvm[1035]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:16:13.488099 systemd[1]: Finished lvm2-activation-early.service. 
Sep 6 00:16:13.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.488848 systemd[1]: Reached target cryptsetup.target. Sep 6 00:16:13.491250 systemd[1]: Starting lvm2-activation.service... Sep 6 00:16:13.497270 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:16:13.520816 systemd[1]: Finished lvm2-activation.service. Sep 6 00:16:13.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.521464 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:16:13.523537 systemd[1]: Mounting media-configdrive.mount... Sep 6 00:16:13.523981 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:16:13.524033 systemd[1]: Reached target machines.target. Sep 6 00:16:13.525876 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:16:13.540846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:16:13.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.545434 kernel: ISO 9660 Extensions: RRIP_1991A Sep 6 00:16:13.547881 systemd[1]: Mounted media-configdrive.mount. Sep 6 00:16:13.548594 systemd[1]: Reached target local-fs.target. Sep 6 00:16:13.551009 systemd[1]: Starting ldconfig.service... Sep 6 00:16:13.552303 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:16:13.552396 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:16:13.555688 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:16:13.558898 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:16:13.567161 systemd[1]: Starting systemd-sysext.service... Sep 6 00:16:13.568604 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) Sep 6 00:16:13.571659 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:16:13.595068 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:16:13.609213 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:16:13.609606 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:16:13.642565 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:16:13.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.665474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:16:13.666347 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:16:13.667902 kernel: kauditd_printk_skb: 243 callbacks suppressed Sep 6 00:16:13.667979 kernel: audit: type=1130 audit(1757117773.666:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:13.702415 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:16:13.715191 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) Sep 6 00:16:13.715191 systemd-fsck[1049]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:16:13.721869 kernel: audit: type=1130 audit(1757117773.717:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.717579 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:16:13.719592 systemd[1]: Mounting boot.mount... Sep 6 00:16:13.729583 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:16:13.739198 systemd[1]: Mounted boot.mount. Sep 6 00:16:13.754550 (sd-sysext)[1053]: Using extensions 'kubernetes'. Sep 6 00:16:13.758699 (sd-sysext)[1053]: Merged extensions into '/usr'. Sep 6 00:16:13.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.762279 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:16:13.765536 kernel: audit: type=1130 audit(1757117773.762:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.786665 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 6 00:16:13.789543 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:16:13.790598 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:16:13.793478 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:16:13.796069 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:16:13.798778 systemd[1]: Starting modprobe@loop.service... Sep 6 00:16:13.799461 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:16:13.799773 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:16:13.800239 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:16:13.801301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:16:13.801491 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:16:13.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.803642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:16:13.803894 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:16:13.807427 kernel: audit: type=1130 audit(1757117773.802:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.807541 kernel: audit: type=1131 audit(1757117773.802:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:16:13.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.811451 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:16:13.812403 kernel: audit: type=1130 audit(1757117773.808:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.812458 kernel: audit: type=1131 audit(1757117773.808:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.816609 systemd[1]: Finished systemd-sysext.service. Sep 6 00:16:13.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.820445 kernel: audit: type=1130 audit(1757117773.816:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.817310 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 6 00:16:13.817612 systemd[1]: Finished modprobe@loop.service. Sep 6 00:16:13.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.823399 kernel: audit: type=1130 audit(1757117773.820:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.826534 kernel: audit: type=1131 audit(1757117773.820:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:16:13.823921 systemd[1]: Starting ensure-sysext.service... Sep 6 00:16:13.830238 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:16:13.830337 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:16:13.831975 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:16:13.836679 systemd[1]: Reloading. Sep 6 00:16:13.860011 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:16:13.873549 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:16:13.879266 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 6 00:16:13.987202 /usr/lib/systemd/system-generators/torcx-generator[1082]: time="2025-09-06T00:16:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:16:13.987232 /usr/lib/systemd/system-generators/torcx-generator[1082]: time="2025-09-06T00:16:13Z" level=info msg="torcx already run"
Sep 6 00:16:14.033280 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:16:14.128866 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:16:14.128892 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:16:14.157709 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:16:14.233000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:16:14.233000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:16:14.233000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:16:14.233000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:16:14.233000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:16:14.233000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=27 op=UNLOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=35 op=LOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=28 op=UNLOAD
Sep 6 00:16:14.234000 audit: BPF prog-id=29 op=UNLOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=36 op=LOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=26 op=UNLOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=37 op=LOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=38 op=LOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=24 op=UNLOAD
Sep 6 00:16:14.236000 audit: BPF prog-id=25 op=UNLOAD
Sep 6 00:16:14.239758 systemd[1]: Finished ldconfig.service.
Sep 6 00:16:14.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.241786 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:16:14.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.246117 systemd[1]: Starting audit-rules.service...
Sep 6 00:16:14.248115 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:16:14.253736 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:16:14.262000 audit: BPF prog-id=39 op=LOAD
Sep 6 00:16:14.263862 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:16:14.264000 audit: BPF prog-id=40 op=LOAD
Sep 6 00:16:14.266552 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:16:14.270127 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:16:14.271364 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:16:14.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.274321 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:14.279574 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.281160 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:16:14.283321 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:16:14.285845 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:16:14.288037 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.288215 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:14.288349 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:14.289460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:16:14.289633 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:16:14.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.290399 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:16:14.293738 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.294000 audit[1136]: SYSTEM_BOOT pid=1136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.295601 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:16:14.296079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.296243 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:14.296393 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:14.297296 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:16:14.297512 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:16:14.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.306735 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.308564 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:16:14.313397 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:16:14.314049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.314227 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:14.318695 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:16:14.319576 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:16:14.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.321472 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:16:14.322686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:16:14.323134 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:16:14.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.324831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:16:14.324970 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:16:14.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.329081 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:16:14.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.332137 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:16:14.336872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:16:14.337018 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:16:14.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.337561 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.343241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:16:14.343411 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:16:14.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.366856 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:16:14.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:16:14.368772 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:16:14.373000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:16:14.373000 audit[1154]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd26795ec0 a2=420 a3=0 items=0 ppid=1127 pid=1154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:16:14.373000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:16:14.374631 augenrules[1154]: No rules
Sep 6 00:16:14.374744 systemd[1]: Finished audit-rules.service.
Sep 6 00:16:14.380741 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:16:14.392284 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:16:14.392823 systemd[1]: Reached target time-set.target.
Sep 6 00:16:14.414279 systemd-resolved[1133]: Positive Trust Anchors:
Sep 6 00:16:14.414298 systemd-resolved[1133]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:16:14.414330 systemd-resolved[1133]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:16:14.420773 systemd-resolved[1133]: Using system hostname 'ci-3510.3.8-n-f21ba72e96'.
Sep 6 00:16:14.423180 systemd[1]: Started systemd-resolved.service.
Sep 6 00:16:14.423661 systemd[1]: Reached target network.target.
Sep 6 00:16:14.423969 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:16:14.424261 systemd[1]: Reached target sysinit.target.
Sep 6 00:16:14.424674 systemd[1]: Started motdgen.path.
Sep 6 00:16:14.425029 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:16:14.425772 systemd[1]: Started logrotate.timer.
Sep 6 00:16:14.426150 systemd[1]: Started mdadm.timer.
Sep 6 00:16:14.426437 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:16:14.426784 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:16:14.426821 systemd[1]: Reached target paths.target.
Sep 6 00:16:14.427107 systemd[1]: Reached target timers.target.
Sep 6 00:16:14.427871 systemd[1]: Listening on dbus.socket.
Sep 6 00:16:14.429757 systemd[1]: Starting docker.socket...
Sep 6 00:16:14.433793 systemd[1]: Listening on sshd.socket.
Sep 6 00:16:14.434300 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:14.434864 systemd[1]: Listening on docker.socket.
Sep 6 00:16:14.435306 systemd[1]: Reached target sockets.target.
Sep 6 00:16:14.435613 systemd[1]: Reached target basic.target.
Sep 6 00:16:14.435935 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.435961 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:16:14.437241 systemd[1]: Starting containerd.service...
Sep 6 00:16:15.297510 systemd-resolved[1133]: Clock change detected. Flushing caches.
Sep 6 00:16:15.297780 systemd-timesyncd[1135]: Contacted time server 23.142.248.9:123 (0.flatcar.pool.ntp.org).
Sep 6 00:16:15.297803 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:16:15.298217 systemd-timesyncd[1135]: Initial clock synchronization to Sat 2025-09-06 00:16:15.297442 UTC.
Sep 6 00:16:15.300425 systemd[1]: Starting dbus.service...
Sep 6 00:16:15.303333 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:16:15.308544 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:16:15.309041 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:16:15.312308 systemd[1]: Starting motdgen.service...
Sep 6 00:16:15.318403 systemd[1]: Starting prepare-helm.service...
Sep 6 00:16:15.321548 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:16:15.325305 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:16:15.331379 systemd[1]: Starting systemd-logind.service...
Sep 6 00:16:15.331787 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:16:15.331868 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:16:15.334882 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:16:15.335802 systemd[1]: Starting update-engine.service...
Sep 6 00:16:15.337930 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:16:15.345898 jq[1168]: false
Sep 6 00:16:15.356516 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:16:15.356757 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:16:15.357254 extend-filesystems[1169]: Found loop1
Sep 6 00:16:15.359280 extend-filesystems[1169]: Found vda
Sep 6 00:16:15.365600 jq[1179]: true
Sep 6 00:16:15.368905 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:16:15.369166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found vda1
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found vda2
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found vda3
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found usr
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found vda4
Sep 6 00:16:15.371054 extend-filesystems[1169]: Found vda6
Sep 6 00:16:15.382272 extend-filesystems[1169]: Found vda7
Sep 6 00:16:15.382272 extend-filesystems[1169]: Found vda9
Sep 6 00:16:15.382272 extend-filesystems[1169]: Checking size of /dev/vda9
Sep 6 00:16:15.392063 tar[1181]: linux-amd64/helm
Sep 6 00:16:15.392344 jq[1185]: true
Sep 6 00:16:15.405127 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:16:15.405168 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:16:15.421559 dbus-daemon[1165]: [system] SELinux support is enabled
Sep 6 00:16:15.421787 systemd[1]: Started dbus.service.
Sep 6 00:16:15.424299 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:16:15.424346 systemd[1]: Reached target system-config.target.
Sep 6 00:16:15.424768 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:16:15.424783 systemd[1]: Reached target user-config.target.
Sep 6 00:16:15.440218 extend-filesystems[1169]: Resized partition /dev/vda9
Sep 6 00:16:15.453586 extend-filesystems[1207]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:16:15.459014 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 6 00:16:15.487386 update_engine[1178]: I0906 00:16:15.486876 1178 main.cc:92] Flatcar Update Engine starting
Sep 6 00:16:15.489838 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:16:15.490034 systemd[1]: Finished motdgen.service.
Sep 6 00:16:15.492266 systemd[1]: Started update-engine.service.
Sep 6 00:16:15.492524 update_engine[1178]: I0906 00:16:15.492496 1178 update_check_scheduler.cc:74] Next update check in 4m22s
Sep 6 00:16:15.494797 systemd[1]: Started locksmithd.service.
Sep 6 00:16:15.542602 bash[1218]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:16:15.543443 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:16:15.552016 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 6 00:16:15.566064 env[1184]: time="2025-09-06T00:16:15.565996375Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:16:15.569587 extend-filesystems[1207]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:16:15.569587 extend-filesystems[1207]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 6 00:16:15.569587 extend-filesystems[1207]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 6 00:16:15.571504 extend-filesystems[1169]: Resized filesystem in /dev/vda9
Sep 6 00:16:15.571504 extend-filesystems[1169]: Found vdb
Sep 6 00:16:15.570171 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:16:15.570354 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:16:15.576714 systemd-logind[1177]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 6 00:16:15.578094 systemd-logind[1177]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 6 00:16:15.578837 systemd[1]: Created slice system-sshd.slice.
Sep 6 00:16:15.581284 systemd-logind[1177]: New seat seat0.
Sep 6 00:16:15.583350 systemd[1]: Started systemd-logind.service.
Sep 6 00:16:15.595156 systemd-networkd[1003]: eth0: Gained IPv6LL
Sep 6 00:16:15.598362 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 00:16:15.599109 systemd[1]: Reached target network-online.target.
Sep 6 00:16:15.601804 systemd[1]: Starting kubelet.service...
Sep 6 00:16:15.618156 coreos-metadata[1164]: Sep 06 00:16:15.618 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 6 00:16:15.639163 coreos-metadata[1164]: Sep 06 00:16:15.639 INFO Fetch successful
Sep 6 00:16:15.647053 unknown[1164]: wrote ssh authorized keys file for user: core
Sep 6 00:16:15.659376 env[1184]: time="2025-09-06T00:16:15.659319999Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:16:15.659751 update-ssh-keys[1227]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:16:15.660312 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 6 00:16:15.661215 env[1184]: time="2025-09-06T00:16:15.661178990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664331704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664386944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664689723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664709918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664723692Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664735593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.664828752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.665125226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.665291632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:16:15.666673 env[1184]: time="2025-09-06T00:16:15.665314189Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:16:15.667118 env[1184]: time="2025-09-06T00:16:15.665386920Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:16:15.667118 env[1184]: time="2025-09-06T00:16:15.665402080Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673627657Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673684538Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673699315Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673750283Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673766726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673780076Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673792787Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673809420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673822967Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673835257Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673847222Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.673861616Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.674025400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:16:15.675386 env[1184]: time="2025-09-06T00:16:15.674219899Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674547069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674593668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674608001Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674679569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674696238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674709845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674771140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674783717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674806217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674818061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674828872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.674842794Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.675093073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.675113637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.675825 env[1184]: time="2025-09-06T00:16:15.675126795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.676192 env[1184]: time="2025-09-06T00:16:15.675153819Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:16:15.676192 env[1184]: time="2025-09-06T00:16:15.675171338Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:16:15.676192 env[1184]: time="2025-09-06T00:16:15.675185505Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:16:15.676192 env[1184]: time="2025-09-06T00:16:15.675206728Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:16:15.676192 env[1184]: time="2025-09-06T00:16:15.675254160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.676488252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.676576458Z" level=info msg="Connect containerd service"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.676619984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677615601Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677765346Z" level=info msg="Start subscribing containerd event"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677838142Z" level=info msg="Start recovering state"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677937975Z" level=info msg="Start event monitor"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677962370Z" level=info msg="Start snapshots syncer"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.677992846Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:16:15.678220 env[1184]: time="2025-09-06T00:16:15.678004419Z" level=info msg="Start streaming server"
Sep 6 00:16:15.681559 env[1184]: time="2025-09-06T00:16:15.679504073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:16:15.681559 env[1184]: time="2025-09-06T00:16:15.679595029Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:16:15.681832 env[1184]: time="2025-09-06T00:16:15.681806355Z" level=info msg="containerd successfully booted in 0.120437s"
Sep 6 00:16:15.681907 systemd[1]: Started containerd.service.
Sep 6 00:16:15.723185 systemd-networkd[1003]: eth1: Gained IPv6LL
Sep 6 00:16:16.352004 tar[1181]: linux-amd64/LICENSE
Sep 6 00:16:16.352185 tar[1181]: linux-amd64/README.md
Sep 6 00:16:16.360006 systemd[1]: Finished prepare-helm.service.
Sep 6 00:16:16.388808 sshd_keygen[1192]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:16:16.422351 systemd[1]: Finished sshd-keygen.service.
Sep 6 00:16:16.424835 systemd[1]: Starting issuegen.service...
Sep 6 00:16:16.427640 systemd[1]: Started sshd@0-159.223.206.243:22-147.75.109.163:53736.service.
Sep 6 00:16:16.440646 locksmithd[1214]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:16:16.447188 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:16:16.447463 systemd[1]: Finished issuegen.service.
Sep 6 00:16:16.450026 systemd[1]: Starting systemd-user-sessions.service...
Sep 6 00:16:16.467055 systemd[1]: Finished systemd-user-sessions.service.
Sep 6 00:16:16.469558 systemd[1]: Started getty@tty1.service.
Sep 6 00:16:16.472659 systemd[1]: Started serial-getty@ttyS0.service.
Sep 6 00:16:16.474388 systemd[1]: Reached target getty.target. Sep 6 00:16:16.513662 sshd[1242]: Accepted publickey for core from 147.75.109.163 port 53736 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:16.518527 sshd[1242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:16.533867 systemd[1]: Created slice user-500.slice. Sep 6 00:16:16.537451 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:16:16.543051 systemd-logind[1177]: New session 1 of user core. Sep 6 00:16:16.555277 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:16:16.557828 systemd[1]: Starting user@500.service... Sep 6 00:16:16.564714 (systemd)[1251]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:16.666564 systemd[1251]: Queued start job for default target default.target. Sep 6 00:16:16.667878 systemd[1251]: Reached target paths.target. Sep 6 00:16:16.667908 systemd[1251]: Reached target sockets.target. Sep 6 00:16:16.667921 systemd[1251]: Reached target timers.target. Sep 6 00:16:16.667935 systemd[1251]: Reached target basic.target. Sep 6 00:16:16.668092 systemd[1]: Started user@500.service. Sep 6 00:16:16.669550 systemd[1]: Started session-1.scope. Sep 6 00:16:16.674364 systemd[1251]: Reached target default.target. Sep 6 00:16:16.675047 systemd[1251]: Startup finished in 98ms. Sep 6 00:16:16.746625 systemd[1]: Started sshd@1-159.223.206.243:22-147.75.109.163:53744.service. Sep 6 00:16:16.823245 sshd[1260]: Accepted publickey for core from 147.75.109.163 port 53744 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:16.825775 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:16.836914 systemd[1]: Started session-2.scope. Sep 6 00:16:16.838395 systemd-logind[1177]: New session 2 of user core. 
Sep 6 00:16:16.912965 sshd[1260]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:16.922357 systemd[1]: Started sshd@2-159.223.206.243:22-147.75.109.163:53760.service. Sep 6 00:16:16.927548 systemd[1]: sshd@1-159.223.206.243:22-147.75.109.163:53744.service: Deactivated successfully. Sep 6 00:16:16.928933 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:16:16.930656 systemd-logind[1177]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:16:16.932373 systemd-logind[1177]: Removed session 2. Sep 6 00:16:16.987054 sshd[1265]: Accepted publickey for core from 147.75.109.163 port 53760 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:16.988264 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:16.995648 systemd-logind[1177]: New session 3 of user core. Sep 6 00:16:16.996215 systemd[1]: Started session-3.scope. Sep 6 00:16:17.069288 sshd[1265]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:17.073829 systemd[1]: sshd@2-159.223.206.243:22-147.75.109.163:53760.service: Deactivated successfully. Sep 6 00:16:17.073859 systemd-logind[1177]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:16:17.074905 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:16:17.076925 systemd-logind[1177]: Removed session 3. Sep 6 00:16:17.098663 systemd[1]: Started kubelet.service. Sep 6 00:16:17.100401 systemd[1]: Reached target multi-user.target. Sep 6 00:16:17.103703 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:16:17.114222 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:16:17.114401 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:16:17.115446 systemd[1]: Startup finished in 965ms (kernel) + 7.107s (initrd) + 7.344s (userspace) = 15.417s. 
Sep 6 00:16:17.772113 kubelet[1273]: E0906 00:16:17.772052 1273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:17.775055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:17.775274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:16:17.775640 systemd[1]: kubelet.service: Consumed 1.318s CPU time. Sep 6 00:16:27.075886 systemd[1]: Started sshd@3-159.223.206.243:22-147.75.109.163:38200.service. Sep 6 00:16:27.129089 sshd[1281]: Accepted publickey for core from 147.75.109.163 port 38200 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:27.130813 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:27.136752 systemd[1]: Started session-4.scope. Sep 6 00:16:27.137375 systemd-logind[1177]: New session 4 of user core. Sep 6 00:16:27.203934 sshd[1281]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:27.208942 systemd[1]: sshd@3-159.223.206.243:22-147.75.109.163:38200.service: Deactivated successfully. Sep 6 00:16:27.209745 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:16:27.210513 systemd-logind[1177]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:16:27.212448 systemd[1]: Started sshd@4-159.223.206.243:22-147.75.109.163:38206.service. Sep 6 00:16:27.214042 systemd-logind[1177]: Removed session 4. 
Sep 6 00:16:27.265925 sshd[1287]: Accepted publickey for core from 147.75.109.163 port 38206 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:27.268668 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:27.275641 systemd-logind[1177]: New session 5 of user core. Sep 6 00:16:27.275904 systemd[1]: Started session-5.scope. Sep 6 00:16:27.334417 sshd[1287]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:27.341433 systemd[1]: sshd@4-159.223.206.243:22-147.75.109.163:38206.service: Deactivated successfully. Sep 6 00:16:27.342577 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:16:27.343565 systemd-logind[1177]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:16:27.345818 systemd[1]: Started sshd@5-159.223.206.243:22-147.75.109.163:38210.service. Sep 6 00:16:27.349345 systemd-logind[1177]: Removed session 5. Sep 6 00:16:27.403203 sshd[1293]: Accepted publickey for core from 147.75.109.163 port 38210 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:27.405455 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:27.411604 systemd[1]: Started session-6.scope. Sep 6 00:16:27.412053 systemd-logind[1177]: New session 6 of user core. Sep 6 00:16:27.478603 sshd[1293]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:27.485086 systemd[1]: sshd@5-159.223.206.243:22-147.75.109.163:38210.service: Deactivated successfully. Sep 6 00:16:27.486145 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:16:27.486967 systemd-logind[1177]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:16:27.489076 systemd[1]: Started sshd@6-159.223.206.243:22-147.75.109.163:38214.service. Sep 6 00:16:27.492182 systemd-logind[1177]: Removed session 6. 
Sep 6 00:16:27.542849 sshd[1299]: Accepted publickey for core from 147.75.109.163 port 38214 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:16:27.545367 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:16:27.551372 systemd[1]: Started session-7.scope. Sep 6 00:16:27.552039 systemd-logind[1177]: New session 7 of user core. Sep 6 00:16:27.622581 sudo[1302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:16:27.623479 sudo[1302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:16:27.655187 systemd[1]: Starting docker.service... Sep 6 00:16:27.710203 env[1312]: time="2025-09-06T00:16:27.710121911Z" level=info msg="Starting up" Sep 6 00:16:27.713965 env[1312]: time="2025-09-06T00:16:27.713923537Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:16:27.714152 env[1312]: time="2025-09-06T00:16:27.714132997Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:16:27.714234 env[1312]: time="2025-09-06T00:16:27.714216456Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:16:27.714309 env[1312]: time="2025-09-06T00:16:27.714295548Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:16:27.716433 env[1312]: time="2025-09-06T00:16:27.716400459Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:16:27.716613 env[1312]: time="2025-09-06T00:16:27.716594772Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:16:27.716699 env[1312]: time="2025-09-06T00:16:27.716681390Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:16:27.716763 env[1312]: time="2025-09-06T00:16:27.716750301Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Sep 6 00:16:27.723781 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1993061668-merged.mount: Deactivated successfully. Sep 6 00:16:27.746264 env[1312]: time="2025-09-06T00:16:27.746209943Z" level=info msg="Loading containers: start." Sep 6 00:16:27.858867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:16:27.859182 systemd[1]: Stopped kubelet.service. Sep 6 00:16:27.859244 systemd[1]: kubelet.service: Consumed 1.318s CPU time. Sep 6 00:16:27.861041 systemd[1]: Starting kubelet.service... Sep 6 00:16:27.920999 kernel: Initializing XFRM netlink socket Sep 6 00:16:27.973549 env[1312]: time="2025-09-06T00:16:27.973509524Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:16:27.998303 systemd[1]: Started kubelet.service. Sep 6 00:16:28.080893 systemd-networkd[1003]: docker0: Link UP Sep 6 00:16:28.097173 env[1312]: time="2025-09-06T00:16:28.097121521Z" level=info msg="Loading containers: done." Sep 6 00:16:28.098429 kubelet[1378]: E0906 00:16:28.098386 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:28.101766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:28.101917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:16:28.115467 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3966703799-merged.mount: Deactivated successfully. 
Sep 6 00:16:28.121007 env[1312]: time="2025-09-06T00:16:28.120557464Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:16:28.121007 env[1312]: time="2025-09-06T00:16:28.120797133Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:16:28.121007 env[1312]: time="2025-09-06T00:16:28.120909325Z" level=info msg="Daemon has completed initialization" Sep 6 00:16:28.134854 systemd[1]: Started docker.service. Sep 6 00:16:28.144300 env[1312]: time="2025-09-06T00:16:28.144224879Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:16:28.168925 systemd[1]: Starting coreos-metadata.service... Sep 6 00:16:28.213071 coreos-metadata[1438]: Sep 06 00:16:28.212 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 6 00:16:28.227140 coreos-metadata[1438]: Sep 06 00:16:28.226 INFO Fetch successful Sep 6 00:16:28.240807 systemd[1]: Finished coreos-metadata.service. Sep 6 00:16:29.236038 env[1184]: time="2025-09-06T00:16:29.235953674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:16:29.852007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286683453.mount: Deactivated successfully. 
Sep 6 00:16:31.447575 env[1184]: time="2025-09-06T00:16:31.447516761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:31.449269 env[1184]: time="2025-09-06T00:16:31.449217016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:31.452032 env[1184]: time="2025-09-06T00:16:31.451950513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:31.454375 env[1184]: time="2025-09-06T00:16:31.454319383Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:31.455895 env[1184]: time="2025-09-06T00:16:31.455831769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:16:31.458116 env[1184]: time="2025-09-06T00:16:31.457816648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:16:33.151648 env[1184]: time="2025-09-06T00:16:33.151589384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:33.153284 env[1184]: time="2025-09-06T00:16:33.153240248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:16:33.155322 env[1184]: time="2025-09-06T00:16:33.155282571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:33.157564 env[1184]: time="2025-09-06T00:16:33.157526875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:33.158611 env[1184]: time="2025-09-06T00:16:33.158576005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:16:33.159494 env[1184]: time="2025-09-06T00:16:33.159452102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:16:34.617593 env[1184]: time="2025-09-06T00:16:34.617519128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:34.619125 env[1184]: time="2025-09-06T00:16:34.619075412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:34.622651 env[1184]: time="2025-09-06T00:16:34.621477834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:34.624002 env[1184]: time="2025-09-06T00:16:34.623951695Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:34.625033 env[1184]: time="2025-09-06T00:16:34.624917810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:16:34.625616 env[1184]: time="2025-09-06T00:16:34.625590876Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:16:35.918479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933278516.mount: Deactivated successfully. Sep 6 00:16:36.771801 env[1184]: time="2025-09-06T00:16:36.771730570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:36.773177 env[1184]: time="2025-09-06T00:16:36.773132290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:36.774631 env[1184]: time="2025-09-06T00:16:36.774588707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:36.776024 env[1184]: time="2025-09-06T00:16:36.775964743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:36.776670 env[1184]: time="2025-09-06T00:16:36.776623921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference 
\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:16:36.777914 env[1184]: time="2025-09-06T00:16:36.777858209Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:16:37.282653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877571241.mount: Deactivated successfully. Sep 6 00:16:38.108914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:16:38.109175 systemd[1]: Stopped kubelet.service. Sep 6 00:16:38.111306 systemd[1]: Starting kubelet.service... Sep 6 00:16:38.250746 systemd[1]: Started kubelet.service. Sep 6 00:16:38.331400 kubelet[1461]: E0906 00:16:38.331343 1461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:16:38.334103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:16:38.334246 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:16:38.438664 env[1184]: time="2025-09-06T00:16:38.438054265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.440469 env[1184]: time="2025-09-06T00:16:38.440420932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.442651 env[1184]: time="2025-09-06T00:16:38.442611392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.445624 env[1184]: time="2025-09-06T00:16:38.445571841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:16:38.446114 env[1184]: time="2025-09-06T00:16:38.446085303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:16:38.446426 env[1184]: time="2025-09-06T00:16:38.444627249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.927831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579084165.mount: Deactivated successfully. 
Sep 6 00:16:38.933212 env[1184]: time="2025-09-06T00:16:38.933154979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.934374 env[1184]: time="2025-09-06T00:16:38.934316355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.935828 env[1184]: time="2025-09-06T00:16:38.935757146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.937568 env[1184]: time="2025-09-06T00:16:38.937534415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:38.938057 env[1184]: time="2025-09-06T00:16:38.938004841Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:16:38.938540 env[1184]: time="2025-09-06T00:16:38.938506953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:16:39.496181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount982230605.mount: Deactivated successfully. 
Sep 6 00:16:42.031807 env[1184]: time="2025-09-06T00:16:42.031732463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:42.034032 env[1184]: time="2025-09-06T00:16:42.033495949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:42.035487 env[1184]: time="2025-09-06T00:16:42.035415365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:42.037397 env[1184]: time="2025-09-06T00:16:42.037356890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:42.038550 env[1184]: time="2025-09-06T00:16:42.038510807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:16:44.658834 systemd[1]: Stopped kubelet.service. Sep 6 00:16:44.664356 systemd[1]: Starting kubelet.service... Sep 6 00:16:44.712140 systemd[1]: Reloading. 
Sep 6 00:16:44.854583 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-09-06T00:16:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:44.854616 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-09-06T00:16:44Z" level=info msg="torcx already run" Sep 6 00:16:44.977900 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:16:44.977929 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:16:45.000962 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:16:45.110906 systemd[1]: Started kubelet.service. Sep 6 00:16:45.119347 systemd[1]: Stopping kubelet.service... Sep 6 00:16:45.120789 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:16:45.121329 systemd[1]: Stopped kubelet.service. Sep 6 00:16:45.125266 systemd[1]: Starting kubelet.service... Sep 6 00:16:45.279285 systemd[1]: Started kubelet.service. Sep 6 00:16:45.328647 kubelet[1571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:45.329092 kubelet[1571]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:16:45.329157 kubelet[1571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:45.329356 kubelet[1571]: I0906 00:16:45.329289 1571 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:16:45.794299 kubelet[1571]: I0906 00:16:45.794238 1571 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:16:45.794520 kubelet[1571]: I0906 00:16:45.794503 1571 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:16:45.795026 kubelet[1571]: I0906 00:16:45.795003 1571 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:16:45.823910 kubelet[1571]: E0906 00:16:45.823853 1571 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.223.206.243:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:45.826531 kubelet[1571]: I0906 00:16:45.826477 1571 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:16:45.835823 kubelet[1571]: E0906 00:16:45.835772 1571 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:16:45.836160 kubelet[1571]: I0906 00:16:45.836139 1571 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:16:45.843353 kubelet[1571]: I0906 00:16:45.843302 1571 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:16:45.844807 kubelet[1571]: I0906 00:16:45.844753 1571 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:16:45.845358 kubelet[1571]: I0906 00:16:45.845289 1571 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:16:45.845871 kubelet[1571]: I0906 00:16:45.845591 1571 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-f21ba72e96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemo
ryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:16:45.846166 kubelet[1571]: I0906 00:16:45.846143 1571 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:16:45.846288 kubelet[1571]: I0906 00:16:45.846270 1571 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:16:45.846574 kubelet[1571]: I0906 00:16:45.846546 1571 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:45.855734 kubelet[1571]: I0906 00:16:45.855668 1571 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:16:45.855734 kubelet[1571]: I0906 00:16:45.855735 1571 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:16:45.855969 kubelet[1571]: I0906 00:16:45.855818 1571 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:16:45.855969 kubelet[1571]: I0906 00:16:45.855850 1571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:16:45.860139 kubelet[1571]: W0906 00:16:45.860052 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.206.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f21ba72e96&limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:45.860139 kubelet[1571]: E0906 00:16:45.860135 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.206.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f21ba72e96&limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:45.860432 kubelet[1571]: W0906 00:16:45.860323 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://159.223.206.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:45.860432 kubelet[1571]: E0906 00:16:45.860359 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.206.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:45.861646 kubelet[1571]: I0906 00:16:45.860902 1571 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:16:45.861646 kubelet[1571]: I0906 00:16:45.861444 1571 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:16:45.861646 kubelet[1571]: W0906 00:16:45.861519 1571 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:16:45.864723 kubelet[1571]: I0906 00:16:45.864676 1571 server.go:1274] "Started kubelet" Sep 6 00:16:45.876914 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:16:45.877088 kubelet[1571]: I0906 00:16:45.877029 1571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:16:45.881578 kubelet[1571]: E0906 00:16:45.881546 1571 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:16:45.882021 kubelet[1571]: I0906 00:16:45.881966 1571 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:16:45.884165 kubelet[1571]: I0906 00:16:45.884125 1571 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:16:45.889637 kubelet[1571]: I0906 00:16:45.889561 1571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:16:45.890161 kubelet[1571]: I0906 00:16:45.890135 1571 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:16:45.891788 kubelet[1571]: I0906 00:16:45.891426 1571 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:16:45.891788 kubelet[1571]: E0906 00:16:45.891706 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:45.892073 kubelet[1571]: I0906 00:16:45.892050 1571 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:16:45.892140 kubelet[1571]: I0906 00:16:45.892117 1571 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:16:45.892366 kubelet[1571]: E0906 00:16:45.890484 1571 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.206.243:6443/api/v1/namespaces/default/events\": dial tcp 159.223.206.243:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-f21ba72e96.18628955b71337eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-f21ba72e96,UID:ci-3510.3.8-n-f21ba72e96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-f21ba72e96,},FirstTimestamp:2025-09-06 00:16:45.864630251 +0000 UTC 
m=+0.578768961,LastTimestamp:2025-09-06 00:16:45.864630251 +0000 UTC m=+0.578768961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-f21ba72e96,}" Sep 6 00:16:45.894513 kubelet[1571]: I0906 00:16:45.894481 1571 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:16:45.894755 kubelet[1571]: I0906 00:16:45.894732 1571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:16:45.895650 kubelet[1571]: W0906 00:16:45.895588 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.206.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:45.895884 kubelet[1571]: E0906 00:16:45.895856 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.206.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:45.896111 kubelet[1571]: E0906 00:16:45.896075 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f21ba72e96?timeout=10s\": dial tcp 159.223.206.243:6443: connect: connection refused" interval="200ms" Sep 6 00:16:45.897886 kubelet[1571]: I0906 00:16:45.897848 1571 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:16:45.904608 kubelet[1571]: I0906 00:16:45.904557 1571 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:16:45.927758 kubelet[1571]: I0906 00:16:45.927725 1571 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:16:45.927758 kubelet[1571]: I0906 00:16:45.927743 1571 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:16:45.927758 kubelet[1571]: I0906 00:16:45.927766 1571 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:45.930238 kubelet[1571]: I0906 00:16:45.930183 1571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:16:45.930463 kubelet[1571]: I0906 00:16:45.930445 1571 policy_none.go:49] "None policy: Start" Sep 6 00:16:45.931490 kubelet[1571]: I0906 00:16:45.931466 1571 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:16:45.931606 kubelet[1571]: I0906 00:16:45.931508 1571 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:16:45.934802 kubelet[1571]: I0906 00:16:45.934760 1571 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:16:45.934802 kubelet[1571]: I0906 00:16:45.934791 1571 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:16:45.934802 kubelet[1571]: I0906 00:16:45.934814 1571 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:16:45.935135 kubelet[1571]: E0906 00:16:45.934865 1571 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:16:45.936844 kubelet[1571]: W0906 00:16:45.936793 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.206.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:45.937118 kubelet[1571]: E0906 00:16:45.936852 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.206.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:45.940297 systemd[1]: Created slice kubepods.slice. Sep 6 00:16:45.946081 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:16:45.949830 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 6 00:16:45.958349 kubelet[1571]: I0906 00:16:45.958287 1571 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:16:45.958568 kubelet[1571]: I0906 00:16:45.958527 1571 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:16:45.958568 kubelet[1571]: I0906 00:16:45.958546 1571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:16:45.961016 kubelet[1571]: E0906 00:16:45.960942 1571 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:45.962560 kubelet[1571]: I0906 00:16:45.962536 1571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:16:46.045869 systemd[1]: Created slice kubepods-burstable-pode31e35362f335dd615b40192655a4ea1.slice. Sep 6 00:16:46.060469 kubelet[1571]: I0906 00:16:46.060405 1571 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.060966 kubelet[1571]: E0906 00:16:46.060934 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.206.243:6443/api/v1/nodes\": dial tcp 159.223.206.243:6443: connect: connection refused" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.066745 systemd[1]: Created slice kubepods-burstable-podfa77ed7f0483a0633a404caee7cf27ea.slice. Sep 6 00:16:46.080786 systemd[1]: Created slice kubepods-burstable-pod0029cda37d69ed2228c081cf0e91ce4b.slice. 
Sep 6 00:16:46.096835 kubelet[1571]: E0906 00:16:46.096772 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f21ba72e96?timeout=10s\": dial tcp 159.223.206.243:6443: connect: connection refused" interval="400ms" Sep 6 00:16:46.193826 kubelet[1571]: I0906 00:16:46.193766 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194133 kubelet[1571]: I0906 00:16:46.194100 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194315 kubelet[1571]: I0906 00:16:46.194295 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194467 kubelet[1571]: I0906 00:16:46.194445 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194674 kubelet[1571]: I0906 00:16:46.194653 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194791 kubelet[1571]: I0906 00:16:46.194774 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.194987 kubelet[1571]: I0906 00:16:46.194949 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa77ed7f0483a0633a404caee7cf27ea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-f21ba72e96\" (UID: \"fa77ed7f0483a0633a404caee7cf27ea\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.195087 kubelet[1571]: I0906 00:16:46.195072 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.195169 kubelet[1571]: I0906 00:16:46.195152 1571 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.263098 kubelet[1571]: I0906 00:16:46.263057 1571 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.263764 kubelet[1571]: E0906 00:16:46.263715 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.206.243:6443/api/v1/nodes\": dial tcp 159.223.206.243:6443: connect: connection refused" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.365954 kubelet[1571]: E0906 00:16:46.365903 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:46.368033 env[1184]: time="2025-09-06T00:16:46.367568632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-f21ba72e96,Uid:e31e35362f335dd615b40192655a4ea1,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:46.370281 kubelet[1571]: E0906 00:16:46.370245 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:46.372023 env[1184]: time="2025-09-06T00:16:46.371644989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-f21ba72e96,Uid:fa77ed7f0483a0633a404caee7cf27ea,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:46.386117 kubelet[1571]: E0906 00:16:46.386062 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:46.387890 env[1184]: time="2025-09-06T00:16:46.387841546Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-f21ba72e96,Uid:0029cda37d69ed2228c081cf0e91ce4b,Namespace:kube-system,Attempt:0,}" Sep 6 00:16:46.498500 kubelet[1571]: E0906 00:16:46.498433 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f21ba72e96?timeout=10s\": dial tcp 159.223.206.243:6443: connect: connection refused" interval="800ms" Sep 6 00:16:46.666585 kubelet[1571]: I0906 00:16:46.665839 1571 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.666585 kubelet[1571]: E0906 00:16:46.666188 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.206.243:6443/api/v1/nodes\": dial tcp 159.223.206.243:6443: connect: connection refused" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:46.844697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418907441.mount: Deactivated successfully. 
Sep 6 00:16:46.851694 env[1184]: time="2025-09-06T00:16:46.851620967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.857557 kubelet[1571]: W0906 00:16:46.857496 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.206.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:46.857715 kubelet[1571]: E0906 00:16:46.857584 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.206.243:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:46.860039 env[1184]: time="2025-09-06T00:16:46.859993926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.860871 env[1184]: time="2025-09-06T00:16:46.860835299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.861618 env[1184]: time="2025-09-06T00:16:46.861591304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.862382 env[1184]: time="2025-09-06T00:16:46.862355461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.863265 
env[1184]: time="2025-09-06T00:16:46.863226978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.865596 env[1184]: time="2025-09-06T00:16:46.865555510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.868799 env[1184]: time="2025-09-06T00:16:46.868743530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.870441 env[1184]: time="2025-09-06T00:16:46.870369546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.873111 env[1184]: time="2025-09-06T00:16:46.873065664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.873784 env[1184]: time="2025-09-06T00:16:46.873755449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.874664 env[1184]: time="2025-09-06T00:16:46.874618548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:16:46.902483 env[1184]: time="2025-09-06T00:16:46.902381616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:46.902761 env[1184]: time="2025-09-06T00:16:46.902492936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:46.902761 env[1184]: time="2025-09-06T00:16:46.902518392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:46.903304 env[1184]: time="2025-09-06T00:16:46.903212503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8deb5938773ffe01c4cb965a382923199f5927eeaaf7b0adc34d29b21822df4b pid=1611 runtime=io.containerd.runc.v2 Sep 6 00:16:46.933039 env[1184]: time="2025-09-06T00:16:46.931991406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:46.934046 env[1184]: time="2025-09-06T00:16:46.933616266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:46.934046 env[1184]: time="2025-09-06T00:16:46.933844215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:46.935788 env[1184]: time="2025-09-06T00:16:46.935009293Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba69d8f28dcace3d90092758f43f2933d199f3843c8f134468d51fcb8c6cfba1 pid=1635 runtime=io.containerd.runc.v2 Sep 6 00:16:46.937933 systemd[1]: Started cri-containerd-8deb5938773ffe01c4cb965a382923199f5927eeaaf7b0adc34d29b21822df4b.scope. Sep 6 00:16:46.948260 env[1184]: time="2025-09-06T00:16:46.946764375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:16:46.948260 env[1184]: time="2025-09-06T00:16:46.946804487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:16:46.948260 env[1184]: time="2025-09-06T00:16:46.946815541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:16:46.948260 env[1184]: time="2025-09-06T00:16:46.946988810Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d168670e75491a524006feddb3a8be33ddd2e80aa18aea4c3d596b4a5255bf0e pid=1651 runtime=io.containerd.runc.v2 Sep 6 00:16:46.979458 systemd[1]: Started cri-containerd-d168670e75491a524006feddb3a8be33ddd2e80aa18aea4c3d596b4a5255bf0e.scope. Sep 6 00:16:46.997360 systemd[1]: Started cri-containerd-ba69d8f28dcace3d90092758f43f2933d199f3843c8f134468d51fcb8c6cfba1.scope. 
Sep 6 00:16:47.016054 kubelet[1571]: W0906 00:16:47.015939 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.206.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f21ba72e96&limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:47.016237 kubelet[1571]: E0906 00:16:47.016067 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.206.243:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-f21ba72e96&limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:47.054152 env[1184]: time="2025-09-06T00:16:47.054097260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-f21ba72e96,Uid:fa77ed7f0483a0633a404caee7cf27ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8deb5938773ffe01c4cb965a382923199f5927eeaaf7b0adc34d29b21822df4b\"" Sep 6 00:16:47.063146 kubelet[1571]: E0906 00:16:47.063097 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:47.068672 env[1184]: time="2025-09-06T00:16:47.066042886Z" level=info msg="CreateContainer within sandbox \"8deb5938773ffe01c4cb965a382923199f5927eeaaf7b0adc34d29b21822df4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:16:47.075586 env[1184]: time="2025-09-06T00:16:47.075532251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-f21ba72e96,Uid:0029cda37d69ed2228c081cf0e91ce4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba69d8f28dcace3d90092758f43f2933d199f3843c8f134468d51fcb8c6cfba1\"" Sep 6 00:16:47.076384 kubelet[1571]: E0906 00:16:47.076344 1571 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:47.078324 env[1184]: time="2025-09-06T00:16:47.078279405Z" level=info msg="CreateContainer within sandbox \"ba69d8f28dcace3d90092758f43f2933d199f3843c8f134468d51fcb8c6cfba1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:16:47.083400 env[1184]: time="2025-09-06T00:16:47.083337384Z" level=info msg="CreateContainer within sandbox \"8deb5938773ffe01c4cb965a382923199f5927eeaaf7b0adc34d29b21822df4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1086504d946d1d3456c5d4c9410fcfe95c9d1628b5f8cb37401993946a92551b\"" Sep 6 00:16:47.084050 env[1184]: time="2025-09-06T00:16:47.084018406Z" level=info msg="StartContainer for \"1086504d946d1d3456c5d4c9410fcfe95c9d1628b5f8cb37401993946a92551b\"" Sep 6 00:16:47.098434 env[1184]: time="2025-09-06T00:16:47.098385077Z" level=info msg="CreateContainer within sandbox \"ba69d8f28dcace3d90092758f43f2933d199f3843c8f134468d51fcb8c6cfba1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ac08ee6253cb536b839ce2974fd46f89d3cecc709c61accd76847963e9b142f\"" Sep 6 00:16:47.100356 env[1184]: time="2025-09-06T00:16:47.100314568Z" level=info msg="StartContainer for \"3ac08ee6253cb536b839ce2974fd46f89d3cecc709c61accd76847963e9b142f\"" Sep 6 00:16:47.110313 env[1184]: time="2025-09-06T00:16:47.110252509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-f21ba72e96,Uid:e31e35362f335dd615b40192655a4ea1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d168670e75491a524006feddb3a8be33ddd2e80aa18aea4c3d596b4a5255bf0e\"" Sep 6 00:16:47.111569 kubelet[1571]: E0906 00:16:47.111539 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:47.113313 env[1184]: time="2025-09-06T00:16:47.113267280Z" level=info msg="CreateContainer within sandbox \"d168670e75491a524006feddb3a8be33ddd2e80aa18aea4c3d596b4a5255bf0e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:16:47.124428 systemd[1]: Started cri-containerd-1086504d946d1d3456c5d4c9410fcfe95c9d1628b5f8cb37401993946a92551b.scope. Sep 6 00:16:47.135752 env[1184]: time="2025-09-06T00:16:47.135699242Z" level=info msg="CreateContainer within sandbox \"d168670e75491a524006feddb3a8be33ddd2e80aa18aea4c3d596b4a5255bf0e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d01033db051058a9968601d0a619aab080aa382c2d9226083f623d60fd89d836\"" Sep 6 00:16:47.136513 env[1184]: time="2025-09-06T00:16:47.136480650Z" level=info msg="StartContainer for \"d01033db051058a9968601d0a619aab080aa382c2d9226083f623d60fd89d836\"" Sep 6 00:16:47.152039 systemd[1]: Started cri-containerd-3ac08ee6253cb536b839ce2974fd46f89d3cecc709c61accd76847963e9b142f.scope. Sep 6 00:16:47.178516 systemd[1]: Started cri-containerd-d01033db051058a9968601d0a619aab080aa382c2d9226083f623d60fd89d836.scope. 
Sep 6 00:16:47.234378 env[1184]: time="2025-09-06T00:16:47.234225439Z" level=info msg="StartContainer for \"1086504d946d1d3456c5d4c9410fcfe95c9d1628b5f8cb37401993946a92551b\" returns successfully" Sep 6 00:16:47.271138 env[1184]: time="2025-09-06T00:16:47.271082612Z" level=info msg="StartContainer for \"3ac08ee6253cb536b839ce2974fd46f89d3cecc709c61accd76847963e9b142f\" returns successfully" Sep 6 00:16:47.285156 env[1184]: time="2025-09-06T00:16:47.285096474Z" level=info msg="StartContainer for \"d01033db051058a9968601d0a619aab080aa382c2d9226083f623d60fd89d836\" returns successfully" Sep 6 00:16:47.299845 kubelet[1571]: E0906 00:16:47.299745 1571 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.206.243:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-f21ba72e96?timeout=10s\": dial tcp 159.223.206.243:6443: connect: connection refused" interval="1.6s" Sep 6 00:16:47.412672 kubelet[1571]: W0906 00:16:47.412603 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.206.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:47.413200 kubelet[1571]: E0906 00:16:47.413156 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.206.243:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:47.425930 kubelet[1571]: W0906 00:16:47.425856 1571 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.206.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.206.243:6443: connect: connection refused Sep 6 00:16:47.426202 
kubelet[1571]: E0906 00:16:47.426177 1571 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.206.243:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.206.243:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:16:47.467610 kubelet[1571]: I0906 00:16:47.467576 1571 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:47.468256 kubelet[1571]: E0906 00:16:47.468226 1571 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.206.243:6443/api/v1/nodes\": dial tcp 159.223.206.243:6443: connect: connection refused" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:47.953769 kubelet[1571]: E0906 00:16:47.953734 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:47.958658 kubelet[1571]: E0906 00:16:47.958622 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:47.961557 kubelet[1571]: E0906 00:16:47.961524 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:48.963249 kubelet[1571]: E0906 00:16:48.963209 1571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:49.069707 kubelet[1571]: I0906 00:16:49.069671 1571 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:49.271507 
kubelet[1571]: E0906 00:16:49.271366 1571 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-f21ba72e96\" not found" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:49.296004 kubelet[1571]: E0906 00:16:49.295876 1571 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-f21ba72e96.18628955b71337eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-f21ba72e96,UID:ci-3510.3.8-n-f21ba72e96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-f21ba72e96,},FirstTimestamp:2025-09-06 00:16:45.864630251 +0000 UTC m=+0.578768961,LastTimestamp:2025-09-06 00:16:45.864630251 +0000 UTC m=+0.578768961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-f21ba72e96,}" Sep 6 00:16:49.350174 kubelet[1571]: E0906 00:16:49.350047 1571 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-f21ba72e96.18628955b814feac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-f21ba72e96,UID:ci-3510.3.8-n-f21ba72e96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-f21ba72e96,},FirstTimestamp:2025-09-06 00:16:45.881523884 +0000 UTC m=+0.595662594,LastTimestamp:2025-09-06 00:16:45.881523884 +0000 UTC m=+0.595662594,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-f21ba72e96,}" Sep 6 00:16:49.363139 kubelet[1571]: I0906 
00:16:49.363094 1571 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:49.363390 kubelet[1571]: E0906 00:16:49.363371 1571 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-f21ba72e96\": node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.379955 kubelet[1571]: E0906 00:16:49.379905 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.480762 kubelet[1571]: E0906 00:16:49.480718 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.581477 kubelet[1571]: E0906 00:16:49.581348 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.682601 kubelet[1571]: E0906 00:16:49.682488 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.783148 kubelet[1571]: E0906 00:16:49.783092 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.884463 kubelet[1571]: E0906 00:16:49.884397 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:49.985648 kubelet[1571]: E0906 00:16:49.985592 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:50.086398 kubelet[1571]: E0906 00:16:50.086337 1571 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:50.860196 kubelet[1571]: I0906 00:16:50.860156 1571 apiserver.go:52] "Watching apiserver" Sep 6 00:16:50.892501 kubelet[1571]: I0906 00:16:50.892461 
1571 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:16:51.737963 systemd[1]: Reloading. Sep 6 00:16:51.858433 /usr/lib/systemd/system-generators/torcx-generator[1858]: time="2025-09-06T00:16:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:16:51.859082 /usr/lib/systemd/system-generators/torcx-generator[1858]: time="2025-09-06T00:16:51Z" level=info msg="torcx already run" Sep 6 00:16:51.971602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:16:51.971631 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:16:52.012412 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:16:52.205919 systemd[1]: Stopping kubelet.service... Sep 6 00:16:52.228781 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:16:52.229316 systemd[1]: Stopped kubelet.service. Sep 6 00:16:52.229564 systemd[1]: kubelet.service: Consumed 1.018s CPU time. Sep 6 00:16:52.235111 systemd[1]: Starting kubelet.service... Sep 6 00:16:53.273644 systemd[1]: Started kubelet.service. Sep 6 00:16:53.379002 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:16:53.379754 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:16:53.379893 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:16:53.382057 kubelet[1908]: I0906 00:16:53.381962 1908 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:16:53.403219 kubelet[1908]: I0906 00:16:53.403170 1908 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:16:53.403219 kubelet[1908]: I0906 00:16:53.403211 1908 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:16:53.403694 kubelet[1908]: I0906 00:16:53.403670 1908 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:16:53.408081 kubelet[1908]: I0906 00:16:53.408045 1908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 6 00:16:53.414075 sudo[1923]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:16:53.414337 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:16:53.414915 kubelet[1908]: I0906 00:16:53.414881 1908 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:16:53.422905 kubelet[1908]: E0906 00:16:53.422863 1908 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:16:53.422905 kubelet[1908]: I0906 00:16:53.422900 1908 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:16:53.428447 kubelet[1908]: I0906 00:16:53.428411 1908 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:16:53.428762 kubelet[1908]: I0906 00:16:53.428745 1908 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:16:53.429012 kubelet[1908]: I0906 00:16:53.428965 1908 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:16:53.430456 kubelet[1908]: I0906 00:16:53.429086 1908 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-f21ba72e96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:16:53.430661 kubelet[1908]: I0906 00:16:53.430645 1908 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:16:53.430735 kubelet[1908]: I0906 00:16:53.430724 1908 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:16:53.430825 kubelet[1908]: I0906 00:16:53.430814 1908 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:53.431087 kubelet[1908]: I0906 00:16:53.431072 1908 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:16:53.431191 kubelet[1908]: I0906 00:16:53.431178 1908 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:16:53.431306 kubelet[1908]: I0906 00:16:53.431295 1908 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:16:53.432004 kubelet[1908]: I0906 00:16:53.431985 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:16:53.434596 kubelet[1908]: I0906 00:16:53.434576 1908 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:16:53.440175 kubelet[1908]: I0906 00:16:53.440106 1908 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:16:53.449705 kubelet[1908]: I0906 00:16:53.449664 1908 server.go:1274] "Started kubelet" Sep 6 00:16:53.458822 kubelet[1908]: I0906 00:16:53.458783 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:16:53.471848 kubelet[1908]: I0906 00:16:53.471780 1908 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:16:53.473322 kubelet[1908]: I0906 00:16:53.473294 1908 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:16:53.479347 kubelet[1908]: I0906 00:16:53.479260 1908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:16:53.484908 kubelet[1908]: I0906 00:16:53.480316 1908 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:16:53.485717 kubelet[1908]: I0906 00:16:53.485684 1908 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:16:53.485970 kubelet[1908]: E0906 00:16:53.485929 1908 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-f21ba72e96\" not found" Sep 6 00:16:53.491753 kubelet[1908]: I0906 00:16:53.491717 1908 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:16:53.492071 kubelet[1908]: I0906 00:16:53.492047 1908 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:16:53.495074 kubelet[1908]: I0906 00:16:53.494929 1908 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:16:53.504006 kubelet[1908]: I0906 00:16:53.503960 1908 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:16:53.504006 kubelet[1908]: I0906 00:16:53.485694 1908 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:16:53.504214 kubelet[1908]: I0906 00:16:53.504201 1908 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:16:53.510475 kubelet[1908]: I0906 00:16:53.510414 1908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:16:53.512692 kubelet[1908]: I0906 00:16:53.512640 1908 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:16:53.512692 kubelet[1908]: I0906 00:16:53.512700 1908 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:16:53.512874 kubelet[1908]: I0906 00:16:53.512724 1908 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:16:53.512874 kubelet[1908]: E0906 00:16:53.512792 1908 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:16:53.533603 kubelet[1908]: E0906 00:16:53.530672 1908 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:16:53.613900 kubelet[1908]: E0906 00:16:53.613843 1908 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:16:53.619787 kubelet[1908]: I0906 00:16:53.619757 1908 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:16:53.620070 kubelet[1908]: I0906 00:16:53.620051 1908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:16:53.620169 kubelet[1908]: I0906 00:16:53.620157 1908 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:16:53.620423 kubelet[1908]: I0906 00:16:53.620404 1908 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:16:53.620563 kubelet[1908]: I0906 00:16:53.620505 1908 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:16:53.620639 kubelet[1908]: I0906 00:16:53.620628 1908 policy_none.go:49] "None policy: Start" Sep 6 00:16:53.621450 kubelet[1908]: I0906 00:16:53.621434 1908 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:16:53.621569 kubelet[1908]: I0906 00:16:53.621558 1908 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:16:53.621784 kubelet[1908]: I0906 00:16:53.621770 1908 state_mem.go:75] "Updated machine memory state" Sep 6 00:16:53.626040 kubelet[1908]: I0906 
00:16:53.626011 1908 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:16:53.626343 kubelet[1908]: I0906 00:16:53.626328 1908 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:16:53.626465 kubelet[1908]: I0906 00:16:53.626429 1908 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:16:53.626934 kubelet[1908]: I0906 00:16:53.626916 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:16:53.734203 kubelet[1908]: I0906 00:16:53.734167 1908 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.745329 kubelet[1908]: I0906 00:16:53.745292 1908 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.745610 kubelet[1908]: I0906 00:16:53.745595 1908 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.829229 kubelet[1908]: W0906 00:16:53.829118 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:53.829524 kubelet[1908]: W0906 00:16:53.829161 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:53.832749 kubelet[1908]: W0906 00:16:53.832709 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:53.915762 kubelet[1908]: I0906 00:16:53.915701 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-ca-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.915762 kubelet[1908]: I0906 00:16:53.915758 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.915991 kubelet[1908]: I0906 00:16:53.915795 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.915991 kubelet[1908]: I0906 00:16:53.915833 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.915991 kubelet[1908]: I0906 00:16:53.915850 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.915991 kubelet[1908]: I0906 00:16:53.915951 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e31e35362f335dd615b40192655a4ea1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" (UID: \"e31e35362f335dd615b40192655a4ea1\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.916114 kubelet[1908]: I0906 00:16:53.915969 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.916114 kubelet[1908]: I0906 00:16:53.916013 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0029cda37d69ed2228c081cf0e91ce4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-f21ba72e96\" (UID: \"0029cda37d69ed2228c081cf0e91ce4b\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:53.916114 kubelet[1908]: I0906 00:16:53.916031 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa77ed7f0483a0633a404caee7cf27ea-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-f21ba72e96\" (UID: \"fa77ed7f0483a0633a404caee7cf27ea\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:54.130612 kubelet[1908]: E0906 00:16:54.130560 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.131109 kubelet[1908]: E0906 00:16:54.131083 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.133531 kubelet[1908]: E0906 00:16:54.133499 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.225441 sudo[1923]: pam_unix(sudo:session): session closed for user root Sep 6 00:16:54.453770 kubelet[1908]: I0906 00:16:54.453616 1908 apiserver.go:52] "Watching apiserver" Sep 6 00:16:54.504740 kubelet[1908]: I0906 00:16:54.504697 1908 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:16:54.566728 kubelet[1908]: E0906 00:16:54.561985 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.566728 kubelet[1908]: E0906 00:16:54.562959 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.575421 kubelet[1908]: W0906 00:16:54.575388 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 00:16:54.575732 kubelet[1908]: E0906 00:16:54.575690 1908 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-f21ba72e96\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" Sep 6 00:16:54.576043 kubelet[1908]: E0906 00:16:54.576027 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:54.617735 kubelet[1908]: I0906 00:16:54.617671 1908 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-f21ba72e96" podStartSLOduration=1.6176506910000001 podStartE2EDuration="1.617650691s" podCreationTimestamp="2025-09-06 00:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:54.606392576 +0000 UTC m=+1.310877675" watchObservedRunningTime="2025-09-06 00:16:54.617650691 +0000 UTC m=+1.322135745" Sep 6 00:16:54.631825 kubelet[1908]: I0906 00:16:54.631759 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-f21ba72e96" podStartSLOduration=1.631738045 podStartE2EDuration="1.631738045s" podCreationTimestamp="2025-09-06 00:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:54.618547735 +0000 UTC m=+1.323032789" watchObservedRunningTime="2025-09-06 00:16:54.631738045 +0000 UTC m=+1.336223098" Sep 6 00:16:55.564797 kubelet[1908]: E0906 00:16:55.564749 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:56.117757 kubelet[1908]: I0906 00:16:56.117719 1908 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:16:56.118364 env[1184]: time="2025-09-06T00:16:56.118323641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:16:56.119178 kubelet[1908]: I0906 00:16:56.119144 1908 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:16:56.509093 sudo[1302]: pam_unix(sudo:session): session closed for user root Sep 6 00:16:56.512823 sshd[1299]: pam_unix(sshd:session): session closed for user core Sep 6 00:16:56.517320 systemd-logind[1177]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:16:56.518409 systemd[1]: sshd@6-159.223.206.243:22-147.75.109.163:38214.service: Deactivated successfully. Sep 6 00:16:56.519522 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:16:56.519698 systemd[1]: session-7.scope: Consumed 5.011s CPU time. Sep 6 00:16:56.520707 systemd-logind[1177]: Removed session 7. Sep 6 00:16:56.566945 kubelet[1908]: E0906 00:16:56.566892 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:16:57.068967 kubelet[1908]: I0906 00:16:57.068894 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-f21ba72e96" podStartSLOduration=4.06887165 podStartE2EDuration="4.06887165s" podCreationTimestamp="2025-09-06 00:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:54.632870389 +0000 UTC m=+1.337355443" watchObservedRunningTime="2025-09-06 00:16:57.06887165 +0000 UTC m=+3.773356705" Sep 6 00:16:57.073705 kubelet[1908]: W0906 00:16:57.073647 1908 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-n-f21ba72e96" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-f21ba72e96' and this object Sep 6 
00:16:57.073918 kubelet[1908]: E0906 00:16:57.073720 1908 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-n-f21ba72e96\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f21ba72e96' and this object" logger="UnhandledError" Sep 6 00:16:57.077767 kubelet[1908]: W0906 00:16:57.077717 1908 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.8-n-f21ba72e96" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-f21ba72e96' and this object Sep 6 00:16:57.077961 kubelet[1908]: E0906 00:16:57.077794 1908 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.8-n-f21ba72e96\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-f21ba72e96' and this object" logger="UnhandledError" Sep 6 00:16:57.079381 systemd[1]: Created slice kubepods-besteffort-pod654d78cc_9a6a_48c0_9a08_f4427dae15ad.slice. Sep 6 00:16:57.107379 systemd[1]: Created slice kubepods-burstable-pod94312cb6_d25e_4877_8fe1_b9c714d1f2c0.slice. 
Sep 6 00:16:57.139723 kubelet[1908]: I0906 00:16:57.139616 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/654d78cc-9a6a-48c0-9a08-f4427dae15ad-kube-proxy\") pod \"kube-proxy-h9mqz\" (UID: \"654d78cc-9a6a-48c0-9a08-f4427dae15ad\") " pod="kube-system/kube-proxy-h9mqz"
Sep 6 00:16:57.139948 kubelet[1908]: I0906 00:16:57.139749 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654d78cc-9a6a-48c0-9a08-f4427dae15ad-xtables-lock\") pod \"kube-proxy-h9mqz\" (UID: \"654d78cc-9a6a-48c0-9a08-f4427dae15ad\") " pod="kube-system/kube-proxy-h9mqz"
Sep 6 00:16:57.139948 kubelet[1908]: I0906 00:16:57.139850 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7mr\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-kube-api-access-5m7mr\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.139948 kubelet[1908]: I0906 00:16:57.139916 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkcr\" (UniqueName: \"kubernetes.io/projected/654d78cc-9a6a-48c0-9a08-f4427dae15ad-kube-api-access-cnkcr\") pod \"kube-proxy-h9mqz\" (UID: \"654d78cc-9a6a-48c0-9a08-f4427dae15ad\") " pod="kube-system/kube-proxy-h9mqz"
Sep 6 00:16:57.140151 kubelet[1908]: I0906 00:16:57.139949 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-clustermesh-secrets\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140151 kubelet[1908]: I0906 00:16:57.140026 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-kernel\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140151 kubelet[1908]: I0906 00:16:57.140101 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-run\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140295 kubelet[1908]: I0906 00:16:57.140176 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-etc-cni-netd\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140295 kubelet[1908]: I0906 00:16:57.140262 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-config-path\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140376 kubelet[1908]: I0906 00:16:57.140332 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654d78cc-9a6a-48c0-9a08-f4427dae15ad-lib-modules\") pod \"kube-proxy-h9mqz\" (UID: \"654d78cc-9a6a-48c0-9a08-f4427dae15ad\") " pod="kube-system/kube-proxy-h9mqz"
Sep 6 00:16:57.140424 kubelet[1908]: I0906 00:16:57.140405 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-bpf-maps\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140530 kubelet[1908]: I0906 00:16:57.140471 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hostproc\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140530 kubelet[1908]: I0906 00:16:57.140505 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cni-path\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140645 kubelet[1908]: I0906 00:16:57.140579 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-lib-modules\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140645 kubelet[1908]: I0906 00:16:57.140606 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hubble-tls\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140783 kubelet[1908]: I0906 00:16:57.140754 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-cgroup\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.140897 kubelet[1908]: I0906 00:16:57.140839 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-xtables-lock\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.141014 kubelet[1908]: I0906 00:16:57.140959 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-net\") pod \"cilium-5wkmh\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") " pod="kube-system/cilium-5wkmh"
Sep 6 00:16:57.174117 systemd[1]: Created slice kubepods-besteffort-pod49084775_6173_4013_936a_32c631ffc705.slice.
Sep 6 00:16:57.241601 kubelet[1908]: I0906 00:16:57.241558 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkl5\" (UniqueName: \"kubernetes.io/projected/49084775-6173-4013-936a-32c631ffc705-kube-api-access-mzkl5\") pod \"cilium-operator-5d85765b45-zlc4q\" (UID: \"49084775-6173-4013-936a-32c631ffc705\") " pod="kube-system/cilium-operator-5d85765b45-zlc4q"
Sep 6 00:16:57.243888 kubelet[1908]: I0906 00:16:57.243831 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49084775-6173-4013-936a-32c631ffc705-cilium-config-path\") pod \"cilium-operator-5d85765b45-zlc4q\" (UID: \"49084775-6173-4013-936a-32c631ffc705\") " pod="kube-system/cilium-operator-5d85765b45-zlc4q"
Sep 6 00:16:57.245155 kubelet[1908]: I0906 00:16:57.245113 1908 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 00:16:58.255652 kubelet[1908]: E0906 00:16:58.255577 1908 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:16:58.257260 kubelet[1908]: E0906 00:16:58.255704 1908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/654d78cc-9a6a-48c0-9a08-f4427dae15ad-kube-proxy podName:654d78cc-9a6a-48c0-9a08-f4427dae15ad nodeName:}" failed. No retries permitted until 2025-09-06 00:16:58.755675083 +0000 UTC m=+5.460160138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/654d78cc-9a6a-48c0-9a08-f4427dae15ad-kube-proxy") pod "kube-proxy-h9mqz" (UID: "654d78cc-9a6a-48c0-9a08-f4427dae15ad") : failed to sync configmap cache: timed out waiting for the condition
Sep 6 00:16:58.311677 kubelet[1908]: E0906 00:16:58.311627 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.313460 env[1184]: time="2025-09-06T00:16:58.312870892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5wkmh,Uid:94312cb6-d25e-4877-8fe1-b9c714d1f2c0,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:58.334091 env[1184]: time="2025-09-06T00:16:58.333964277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:58.334091 env[1184]: time="2025-09-06T00:16:58.334038450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:58.334350 env[1184]: time="2025-09-06T00:16:58.334049177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:58.334418 env[1184]: time="2025-09-06T00:16:58.334397875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074 pid=1989 runtime=io.containerd.runc.v2
Sep 6 00:16:58.358845 systemd[1]: Started cri-containerd-2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074.scope.
Sep 6 00:16:58.378290 kubelet[1908]: E0906 00:16:58.377785 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.378614 env[1184]: time="2025-09-06T00:16:58.378534018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zlc4q,Uid:49084775-6173-4013-936a-32c631ffc705,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:58.406267 env[1184]: time="2025-09-06T00:16:58.400791658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:58.406267 env[1184]: time="2025-09-06T00:16:58.400911340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:58.406267 env[1184]: time="2025-09-06T00:16:58.400939537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:58.406267 env[1184]: time="2025-09-06T00:16:58.401156618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6 pid=2023 runtime=io.containerd.runc.v2
Sep 6 00:16:58.409635 env[1184]: time="2025-09-06T00:16:58.409581245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5wkmh,Uid:94312cb6-d25e-4877-8fe1-b9c714d1f2c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\""
Sep 6 00:16:58.411139 kubelet[1908]: E0906 00:16:58.410612 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.412800 env[1184]: time="2025-09-06T00:16:58.412764871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 00:16:58.427193 systemd[1]: Started cri-containerd-5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6.scope.
Sep 6 00:16:58.498662 env[1184]: time="2025-09-06T00:16:58.498615894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zlc4q,Uid:49084775-6173-4013-936a-32c631ffc705,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\""
Sep 6 00:16:58.499745 kubelet[1908]: E0906 00:16:58.499713 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.890658 kubelet[1908]: E0906 00:16:58.889547 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.892487 env[1184]: time="2025-09-06T00:16:58.892408739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9mqz,Uid:654d78cc-9a6a-48c0-9a08-f4427dae15ad,Namespace:kube-system,Attempt:0,}"
Sep 6 00:16:58.910887 env[1184]: time="2025-09-06T00:16:58.910734251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:16:58.911199 env[1184]: time="2025-09-06T00:16:58.911161553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:16:58.911323 env[1184]: time="2025-09-06T00:16:58.911299154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:16:58.911824 env[1184]: time="2025-09-06T00:16:58.911771984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/754d51961612fcf153dab7ad69809664c3f556cf902480a1f24b7a93e24fc6ed pid=2068 runtime=io.containerd.runc.v2
Sep 6 00:16:58.931454 systemd[1]: Started cri-containerd-754d51961612fcf153dab7ad69809664c3f556cf902480a1f24b7a93e24fc6ed.scope.
Sep 6 00:16:58.976050 env[1184]: time="2025-09-06T00:16:58.975925493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9mqz,Uid:654d78cc-9a6a-48c0-9a08-f4427dae15ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"754d51961612fcf153dab7ad69809664c3f556cf902480a1f24b7a93e24fc6ed\""
Sep 6 00:16:58.980005 kubelet[1908]: E0906 00:16:58.977598 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:58.990140 env[1184]: time="2025-09-06T00:16:58.988068025Z" level=info msg="CreateContainer within sandbox \"754d51961612fcf153dab7ad69809664c3f556cf902480a1f24b7a93e24fc6ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 00:16:59.022668 env[1184]: time="2025-09-06T00:16:59.022589730Z" level=info msg="CreateContainer within sandbox \"754d51961612fcf153dab7ad69809664c3f556cf902480a1f24b7a93e24fc6ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c6318e113c44703e7fb1285486413747b3ec75812f08b48b2d681c647c388ea\""
Sep 6 00:16:59.026899 env[1184]: time="2025-09-06T00:16:59.026823110Z" level=info msg="StartContainer for \"7c6318e113c44703e7fb1285486413747b3ec75812f08b48b2d681c647c388ea\""
Sep 6 00:16:59.058869 systemd[1]: Started cri-containerd-7c6318e113c44703e7fb1285486413747b3ec75812f08b48b2d681c647c388ea.scope.
Sep 6 00:16:59.105321 env[1184]: time="2025-09-06T00:16:59.104955515Z" level=info msg="StartContainer for \"7c6318e113c44703e7fb1285486413747b3ec75812f08b48b2d681c647c388ea\" returns successfully"
Sep 6 00:16:59.580446 kubelet[1908]: E0906 00:16:59.580396 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:16:59.603189 kubelet[1908]: I0906 00:16:59.603114 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9mqz" podStartSLOduration=2.603027506 podStartE2EDuration="2.603027506s" podCreationTimestamp="2025-09-06 00:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:16:59.602838696 +0000 UTC m=+6.307323751" watchObservedRunningTime="2025-09-06 00:16:59.603027506 +0000 UTC m=+6.307512560"
Sep 6 00:17:00.466045 kubelet[1908]: E0906 00:17:00.465493 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:00.499297 kubelet[1908]: E0906 00:17:00.499224 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:00.582619 kubelet[1908]: E0906 00:17:00.581508 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:00.582619 kubelet[1908]: E0906 00:17:00.582160 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:00.700484 update_engine[1178]: I0906 00:17:00.699875 1178 update_attempter.cc:509] Updating boot flags...
Sep 6 00:17:01.588164 kubelet[1908]: E0906 00:17:01.587760 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:02.089388 kubelet[1908]: E0906 00:17:02.089079 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:02.589322 kubelet[1908]: E0906 00:17:02.589279 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:04.777951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742365706.mount: Deactivated successfully.
Sep 6 00:17:07.977211 env[1184]: time="2025-09-06T00:17:07.977020867Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:07.979244 env[1184]: time="2025-09-06T00:17:07.979189004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:07.980767 env[1184]: time="2025-09-06T00:17:07.980727893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:07.981643 env[1184]: time="2025-09-06T00:17:07.981603466Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 6 00:17:07.987131 env[1184]: time="2025-09-06T00:17:07.987031749Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 00:17:07.993838 env[1184]: time="2025-09-06T00:17:07.993727601Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:17:08.013362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836452734.mount: Deactivated successfully.
Sep 6 00:17:08.025329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584847018.mount: Deactivated successfully.
Sep 6 00:17:08.029362 env[1184]: time="2025-09-06T00:17:08.029231911Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\""
Sep 6 00:17:08.033676 env[1184]: time="2025-09-06T00:17:08.033630120Z" level=info msg="StartContainer for \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\""
Sep 6 00:17:08.063539 systemd[1]: Started cri-containerd-2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a.scope.
Sep 6 00:17:08.119008 env[1184]: time="2025-09-06T00:17:08.118921285Z" level=info msg="StartContainer for \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\" returns successfully"
Sep 6 00:17:08.129939 systemd[1]: cri-containerd-2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a.scope: Deactivated successfully.
Sep 6 00:17:08.182331 env[1184]: time="2025-09-06T00:17:08.182269255Z" level=info msg="shim disconnected" id=2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a
Sep 6 00:17:08.182895 env[1184]: time="2025-09-06T00:17:08.182865795Z" level=warning msg="cleaning up after shim disconnected" id=2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a namespace=k8s.io
Sep 6 00:17:08.183083 env[1184]: time="2025-09-06T00:17:08.183059332Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:08.194809 env[1184]: time="2025-09-06T00:17:08.194720555Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:08.603014 kubelet[1908]: E0906 00:17:08.602884 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:08.613418 env[1184]: time="2025-09-06T00:17:08.613374032Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:17:08.637498 env[1184]: time="2025-09-06T00:17:08.637408825Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\""
Sep 6 00:17:08.639413 env[1184]: time="2025-09-06T00:17:08.638145240Z" level=info msg="StartContainer for \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\""
Sep 6 00:17:08.662562 systemd[1]: Started cri-containerd-acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944.scope.
Sep 6 00:17:08.718225 env[1184]: time="2025-09-06T00:17:08.718150698Z" level=info msg="StartContainer for \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\" returns successfully"
Sep 6 00:17:08.733584 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:17:08.734098 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:17:08.734324 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 00:17:08.738083 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:17:08.750132 systemd[1]: cri-containerd-acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944.scope: Deactivated successfully.
Sep 6 00:17:08.762817 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:17:08.786653 env[1184]: time="2025-09-06T00:17:08.786588419Z" level=info msg="shim disconnected" id=acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944
Sep 6 00:17:08.787220 env[1184]: time="2025-09-06T00:17:08.787179648Z" level=warning msg="cleaning up after shim disconnected" id=acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944 namespace=k8s.io
Sep 6 00:17:08.787415 env[1184]: time="2025-09-06T00:17:08.787389832Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:08.799768 env[1184]: time="2025-09-06T00:17:08.799699521Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2399 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:09.009450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a-rootfs.mount: Deactivated successfully.
Sep 6 00:17:09.223924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626484042.mount: Deactivated successfully.
Sep 6 00:17:09.605713 kubelet[1908]: E0906 00:17:09.605667 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:09.644954 env[1184]: time="2025-09-06T00:17:09.644896840Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:17:09.661415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815288208.mount: Deactivated successfully.
Sep 6 00:17:09.667641 env[1184]: time="2025-09-06T00:17:09.667587947Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\""
Sep 6 00:17:09.670145 env[1184]: time="2025-09-06T00:17:09.670103153Z" level=info msg="StartContainer for \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\""
Sep 6 00:17:09.701719 systemd[1]: Started cri-containerd-035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b.scope.
Sep 6 00:17:09.787842 systemd[1]: cri-containerd-035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b.scope: Deactivated successfully.
Sep 6 00:17:09.791809 env[1184]: time="2025-09-06T00:17:09.791714969Z" level=info msg="StartContainer for \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\" returns successfully"
Sep 6 00:17:09.832797 env[1184]: time="2025-09-06T00:17:09.832726771Z" level=info msg="shim disconnected" id=035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b
Sep 6 00:17:09.832797 env[1184]: time="2025-09-06T00:17:09.832796675Z" level=warning msg="cleaning up after shim disconnected" id=035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b namespace=k8s.io
Sep 6 00:17:09.833116 env[1184]: time="2025-09-06T00:17:09.832812595Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:09.859947 env[1184]: time="2025-09-06T00:17:09.859879315Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:10.211578 env[1184]: time="2025-09-06T00:17:10.211402378Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:10.215012 env[1184]: time="2025-09-06T00:17:10.214941473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:10.216725 env[1184]: time="2025-09-06T00:17:10.216680558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:17:10.217173 env[1184]: time="2025-09-06T00:17:10.217127549Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 6 00:17:10.222153 env[1184]: time="2025-09-06T00:17:10.221447439Z" level=info msg="CreateContainer within sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 00:17:10.240259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773751275.mount: Deactivated successfully.
Sep 6 00:17:10.246829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000005857.mount: Deactivated successfully.
Sep 6 00:17:10.251431 env[1184]: time="2025-09-06T00:17:10.251375257Z" level=info msg="CreateContainer within sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\""
Sep 6 00:17:10.254297 env[1184]: time="2025-09-06T00:17:10.252668394Z" level=info msg="StartContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\""
Sep 6 00:17:10.274698 systemd[1]: Started cri-containerd-58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044.scope.
Sep 6 00:17:10.328847 env[1184]: time="2025-09-06T00:17:10.328788339Z" level=info msg="StartContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" returns successfully"
Sep 6 00:17:10.609347 kubelet[1908]: E0906 00:17:10.609317 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:10.612701 kubelet[1908]: E0906 00:17:10.612667 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:10.617470 env[1184]: time="2025-09-06T00:17:10.617424389Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:17:10.634175 env[1184]: time="2025-09-06T00:17:10.634113852Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\""
Sep 6 00:17:10.635042 env[1184]: time="2025-09-06T00:17:10.635010424Z" level=info msg="StartContainer for \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\""
Sep 6 00:17:10.657464 kubelet[1908]: I0906 00:17:10.657391 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-zlc4q" podStartSLOduration=1.940782161 podStartE2EDuration="13.657367015s" podCreationTimestamp="2025-09-06 00:16:57 +0000 UTC" firstStartedPulling="2025-09-06 00:16:58.501523762 +0000 UTC m=+5.206008797" lastFinishedPulling="2025-09-06 00:17:10.218108619 +0000 UTC m=+16.922593651" observedRunningTime="2025-09-06 00:17:10.648033014 +0000 UTC m=+17.352518064" watchObservedRunningTime="2025-09-06 00:17:10.657367015 +0000 UTC m=+17.361852069"
Sep 6 00:17:10.675160 systemd[1]: Started cri-containerd-186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1.scope.
Sep 6 00:17:10.754229 env[1184]: time="2025-09-06T00:17:10.754150354Z" level=info msg="StartContainer for \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\" returns successfully"
Sep 6 00:17:10.761211 systemd[1]: cri-containerd-186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1.scope: Deactivated successfully.
Sep 6 00:17:10.792671 env[1184]: time="2025-09-06T00:17:10.792604929Z" level=info msg="shim disconnected" id=186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1
Sep 6 00:17:10.793294 env[1184]: time="2025-09-06T00:17:10.793243689Z" level=warning msg="cleaning up after shim disconnected" id=186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1 namespace=k8s.io
Sep 6 00:17:10.793516 env[1184]: time="2025-09-06T00:17:10.793490654Z" level=info msg="cleaning up dead shim"
Sep 6 00:17:10.807326 env[1184]: time="2025-09-06T00:17:10.807269348Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:17:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2553 runtime=io.containerd.runc.v2\n"
Sep 6 00:17:11.617907 kubelet[1908]: E0906 00:17:11.617873 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:11.618606 kubelet[1908]: E0906 00:17:11.617899 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:11.623532 env[1184]: time="2025-09-06T00:17:11.623477041Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:17:11.649138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132118509.mount: Deactivated successfully.
Sep 6 00:17:11.663816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760346399.mount: Deactivated successfully.
Sep 6 00:17:11.671763 env[1184]: time="2025-09-06T00:17:11.671693465Z" level=info msg="CreateContainer within sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\""
Sep 6 00:17:11.675176 env[1184]: time="2025-09-06T00:17:11.674614526Z" level=info msg="StartContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\""
Sep 6 00:17:11.698174 systemd[1]: Started cri-containerd-dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a.scope.
Sep 6 00:17:11.757815 env[1184]: time="2025-09-06T00:17:11.757736744Z" level=info msg="StartContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" returns successfully"
Sep 6 00:17:11.975667 kubelet[1908]: I0906 00:17:11.975510 1908 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 6 00:17:12.034451 systemd[1]: Created slice kubepods-burstable-pod3056af5d_5e5f_48bc_a5ba_9a5622009766.slice.
Sep 6 00:17:12.059899 systemd[1]: Created slice kubepods-burstable-pod235e250b_637e_435f_9aaf_c555cc6317d6.slice.
Sep 6 00:17:12.069771 kubelet[1908]: I0906 00:17:12.069717 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3056af5d-5e5f-48bc-a5ba-9a5622009766-config-volume\") pod \"coredns-7c65d6cfc9-zb662\" (UID: \"3056af5d-5e5f-48bc-a5ba-9a5622009766\") " pod="kube-system/coredns-7c65d6cfc9-zb662"
Sep 6 00:17:12.070171 kubelet[1908]: I0906 00:17:12.070132 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/235e250b-637e-435f-9aaf-c555cc6317d6-config-volume\") pod \"coredns-7c65d6cfc9-89m5b\" (UID: \"235e250b-637e-435f-9aaf-c555cc6317d6\") " pod="kube-system/coredns-7c65d6cfc9-89m5b"
Sep 6 00:17:12.070390 kubelet[1908]: I0906 00:17:12.070362 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll6d\" (UniqueName: \"kubernetes.io/projected/235e250b-637e-435f-9aaf-c555cc6317d6-kube-api-access-lll6d\") pod \"coredns-7c65d6cfc9-89m5b\" (UID: \"235e250b-637e-435f-9aaf-c555cc6317d6\") " pod="kube-system/coredns-7c65d6cfc9-89m5b"
Sep 6 00:17:12.070574 kubelet[1908]: I0906 00:17:12.070549 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8tm\" (UniqueName: \"kubernetes.io/projected/3056af5d-5e5f-48bc-a5ba-9a5622009766-kube-api-access-gl8tm\") pod \"coredns-7c65d6cfc9-zb662\" (UID: \"3056af5d-5e5f-48bc-a5ba-9a5622009766\") " pod="kube-system/coredns-7c65d6cfc9-zb662"
Sep 6 00:17:12.354148 kubelet[1908]: E0906 00:17:12.354035 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:12.356208 env[1184]: time="2025-09-06T00:17:12.355769914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zb662,Uid:3056af5d-5e5f-48bc-a5ba-9a5622009766,Namespace:kube-system,Attempt:0,}"
Sep 6 00:17:12.363478 kubelet[1908]: E0906 00:17:12.363406 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:12.364887 env[1184]: time="2025-09-06T00:17:12.364573362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-89m5b,Uid:235e250b-637e-435f-9aaf-c555cc6317d6,Namespace:kube-system,Attempt:0,}"
Sep 6 00:17:12.623378 kubelet[1908]: E0906 00:17:12.623251 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:12.649599 kubelet[1908]: I0906 00:17:12.649091 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5wkmh" podStartSLOduration=6.07783503 podStartE2EDuration="15.649065683s" podCreationTimestamp="2025-09-06 00:16:57 +0000 UTC" firstStartedPulling="2025-09-06 00:16:58.412243536 +0000 UTC m=+5.116728573" lastFinishedPulling="2025-09-06 00:17:07.983474178 +0000 UTC m=+14.687959226" observedRunningTime="2025-09-06 00:17:12.64784999 +0000 UTC m=+19.352335051" watchObservedRunningTime="2025-09-06 00:17:12.649065683 +0000 UTC m=+19.353550738"
Sep 6 00:17:13.626183 kubelet[1908]: E0906 00:17:13.625966 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:14.210373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 6 00:17:14.210504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 00:17:14.210922 systemd-networkd[1003]: cilium_host: Link UP
Sep 6 00:17:14.211274 systemd-networkd[1003]: cilium_net: Link UP
Sep 6 00:17:14.211567 systemd-networkd[1003]: cilium_net: Gained carrier
Sep 6 00:17:14.211844 systemd-networkd[1003]: cilium_host: Gained carrier
Sep 6 00:17:14.261231 systemd-networkd[1003]: cilium_host: Gained IPv6LL
Sep 6 00:17:14.389368 systemd-networkd[1003]: cilium_vxlan: Link UP
Sep 6 00:17:14.389599 systemd-networkd[1003]: cilium_vxlan: Gained carrier
Sep 6 00:17:14.628942 kubelet[1908]: E0906 00:17:14.628891 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:14.864009 kernel: NET: Registered PF_ALG protocol family
Sep 6 00:17:15.184143 systemd-networkd[1003]: cilium_net: Gained IPv6LL
Sep 6 00:17:15.629770 kubelet[1908]: E0906 00:17:15.629729 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:15.763098 systemd-networkd[1003]: lxc_health: Link UP
Sep 6 00:17:15.777525 systemd-networkd[1003]: lxc_health: Gained carrier
Sep 6 00:17:15.778106 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:17:15.938727 systemd-networkd[1003]: lxcaba955d84994: Link UP
Sep 6 00:17:15.945037 kernel: eth0: renamed from tmpc8507
Sep 6 00:17:15.954247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaba955d84994: link becomes ready
Sep 6 00:17:15.953305 systemd-networkd[1003]: lxcaba955d84994: Gained carrier
Sep 6 00:17:15.958432 systemd-networkd[1003]: lxccb96bcd04abd: Link UP
Sep 6 00:17:15.963084 kernel: eth0: renamed from tmpff745
Sep 6 00:17:15.971729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccb96bcd04abd: link becomes ready
Sep 6 00:17:15.969872 systemd-networkd[1003]: lxccb96bcd04abd: Gained carrier
Sep 6 00:17:16.076212 systemd-networkd[1003]: cilium_vxlan: Gained IPv6LL
Sep 6 00:17:16.631994 kubelet[1908]: E0906 00:17:16.631937 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:17.227427 systemd-networkd[1003]: lxc_health: Gained IPv6LL
Sep 6 00:17:17.633812 kubelet[1908]: E0906 00:17:17.633754 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:17.868302 systemd-networkd[1003]: lxcaba955d84994: Gained IPv6LL
Sep 6 00:17:17.995584 systemd-networkd[1003]: lxccb96bcd04abd: Gained IPv6LL
Sep 6 00:17:20.671114 env[1184]: time="2025-09-06T00:17:20.668338835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:17:20.671114 env[1184]: time="2025-09-06T00:17:20.668427581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:17:20.671114 env[1184]: time="2025-09-06T00:17:20.668441742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:17:20.671114 env[1184]: time="2025-09-06T00:17:20.668589909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff7451c888d7a0d868f5af6202d68be722fc15d51e3a0f1fc514bd0c1678cc7c pid=3114 runtime=io.containerd.runc.v2
Sep 6 00:17:20.680866 env[1184]: time="2025-09-06T00:17:20.672384297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:17:20.680866 env[1184]: time="2025-09-06T00:17:20.672439888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:17:20.680866 env[1184]: time="2025-09-06T00:17:20.672450590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:17:20.680866 env[1184]: time="2025-09-06T00:17:20.673383739Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451 pid=3118 runtime=io.containerd.runc.v2
Sep 6 00:17:20.724133 systemd[1]: Started cri-containerd-ff7451c888d7a0d868f5af6202d68be722fc15d51e3a0f1fc514bd0c1678cc7c.scope.
Sep 6 00:17:20.775215 systemd[1]: run-containerd-runc-k8s.io-c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451-runc.HmqyRm.mount: Deactivated successfully.
Sep 6 00:17:20.777832 systemd[1]: Started cri-containerd-c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451.scope.
Sep 6 00:17:20.828517 env[1184]: time="2025-09-06T00:17:20.828456217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-89m5b,Uid:235e250b-637e-435f-9aaf-c555cc6317d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff7451c888d7a0d868f5af6202d68be722fc15d51e3a0f1fc514bd0c1678cc7c\""
Sep 6 00:17:20.830507 kubelet[1908]: E0906 00:17:20.830100 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:20.835408 env[1184]: time="2025-09-06T00:17:20.835359355Z" level=info msg="CreateContainer within sandbox \"ff7451c888d7a0d868f5af6202d68be722fc15d51e3a0f1fc514bd0c1678cc7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:17:20.854433 env[1184]: time="2025-09-06T00:17:20.854356139Z" level=info msg="CreateContainer within sandbox \"ff7451c888d7a0d868f5af6202d68be722fc15d51e3a0f1fc514bd0c1678cc7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5730a43cd0b58a784a60fbf3718b20bbed0ef8e4b8e1597452072ec71b118749\""
Sep 6 00:17:20.855634 env[1184]: time="2025-09-06T00:17:20.855599349Z" level=info msg="StartContainer for \"5730a43cd0b58a784a60fbf3718b20bbed0ef8e4b8e1597452072ec71b118749\""
Sep 6 00:17:20.902676 systemd[1]: Started cri-containerd-5730a43cd0b58a784a60fbf3718b20bbed0ef8e4b8e1597452072ec71b118749.scope.
Sep 6 00:17:20.907426 env[1184]: time="2025-09-06T00:17:20.907341813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zb662,Uid:3056af5d-5e5f-48bc-a5ba-9a5622009766,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451\""
Sep 6 00:17:20.908637 kubelet[1908]: E0906 00:17:20.908434 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:20.917257 env[1184]: time="2025-09-06T00:17:20.917197114Z" level=info msg="CreateContainer within sandbox \"c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:17:20.931522 env[1184]: time="2025-09-06T00:17:20.931372512Z" level=info msg="CreateContainer within sandbox \"c8507325aa3b3b43c179069cfafd7ed9f195798a290627afacdbb7214b8a6451\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1aefe3186539ad0953f9015b5ed4fe49a2a5a040740efc4816bccb5fefde298b\""
Sep 6 00:17:20.932553 env[1184]: time="2025-09-06T00:17:20.932500469Z" level=info msg="StartContainer for \"1aefe3186539ad0953f9015b5ed4fe49a2a5a040740efc4816bccb5fefde298b\""
Sep 6 00:17:20.966471 systemd[1]: Started cri-containerd-1aefe3186539ad0953f9015b5ed4fe49a2a5a040740efc4816bccb5fefde298b.scope.
Sep 6 00:17:20.995841 env[1184]: time="2025-09-06T00:17:20.995774950Z" level=info msg="StartContainer for \"5730a43cd0b58a784a60fbf3718b20bbed0ef8e4b8e1597452072ec71b118749\" returns successfully"
Sep 6 00:17:21.029913 env[1184]: time="2025-09-06T00:17:21.029825262Z" level=info msg="StartContainer for \"1aefe3186539ad0953f9015b5ed4fe49a2a5a040740efc4816bccb5fefde298b\" returns successfully"
Sep 6 00:17:21.653570 kubelet[1908]: E0906 00:17:21.652666 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:21.661016 kubelet[1908]: E0906 00:17:21.659783 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:21.705825 kubelet[1908]: I0906 00:17:21.705738 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-89m5b" podStartSLOduration=24.705707796 podStartE2EDuration="24.705707796s" podCreationTimestamp="2025-09-06 00:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:21.685717754 +0000 UTC m=+28.390202857" watchObservedRunningTime="2025-09-06 00:17:21.705707796 +0000 UTC m=+28.410192851"
Sep 6 00:17:22.661826 kubelet[1908]: E0906 00:17:22.661763 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:22.662569 kubelet[1908]: E0906 00:17:22.662536 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:23.664242 kubelet[1908]: E0906 00:17:23.664210 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:23.664849 kubelet[1908]: E0906 00:17:23.664487 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:17:36.018157 systemd[1]: Started sshd@7-159.223.206.243:22-147.75.109.163:53010.service.
Sep 6 00:17:36.089874 sshd[3271]: Accepted publickey for core from 147.75.109.163 port 53010 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:36.093200 sshd[3271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:36.103503 systemd[1]: Started session-8.scope.
Sep 6 00:17:36.104349 systemd-logind[1177]: New session 8 of user core.
Sep 6 00:17:36.335609 sshd[3271]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:36.340220 systemd[1]: sshd@7-159.223.206.243:22-147.75.109.163:53010.service: Deactivated successfully.
Sep 6 00:17:36.341228 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:17:36.342121 systemd-logind[1177]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:17:36.343726 systemd-logind[1177]: Removed session 8.
Sep 6 00:17:41.343709 systemd[1]: Started sshd@8-159.223.206.243:22-147.75.109.163:33834.service.
Sep 6 00:17:41.396870 sshd[3284]: Accepted publickey for core from 147.75.109.163 port 33834 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:41.399877 sshd[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:41.407107 systemd-logind[1177]: New session 9 of user core.
Sep 6 00:17:41.407400 systemd[1]: Started session-9.scope.
Sep 6 00:17:41.591716 sshd[3284]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:41.596275 systemd-logind[1177]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:17:41.596533 systemd[1]: sshd@8-159.223.206.243:22-147.75.109.163:33834.service: Deactivated successfully.
Sep 6 00:17:41.597396 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:17:41.599329 systemd-logind[1177]: Removed session 9.
Sep 6 00:17:46.601010 systemd[1]: Started sshd@9-159.223.206.243:22-147.75.109.163:33844.service.
Sep 6 00:17:46.662629 sshd[3297]: Accepted publickey for core from 147.75.109.163 port 33844 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:46.665724 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:46.673843 systemd-logind[1177]: New session 10 of user core.
Sep 6 00:17:46.674443 systemd[1]: Started session-10.scope.
Sep 6 00:17:46.882939 sshd[3297]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:46.888725 systemd[1]: sshd@9-159.223.206.243:22-147.75.109.163:33844.service: Deactivated successfully.
Sep 6 00:17:46.890056 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 00:17:46.891408 systemd-logind[1177]: Session 10 logged out. Waiting for processes to exit.
Sep 6 00:17:46.893324 systemd-logind[1177]: Removed session 10.
Sep 6 00:17:51.888204 systemd[1]: Started sshd@10-159.223.206.243:22-147.75.109.163:40456.service.
Sep 6 00:17:51.953420 sshd[3311]: Accepted publickey for core from 147.75.109.163 port 40456 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:51.954522 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:51.966767 systemd-logind[1177]: New session 11 of user core.
Sep 6 00:17:51.967435 systemd[1]: Started session-11.scope.
Sep 6 00:17:52.135564 sshd[3311]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:52.139016 systemd-logind[1177]: Session 11 logged out. Waiting for processes to exit.
Sep 6 00:17:52.139403 systemd[1]: sshd@10-159.223.206.243:22-147.75.109.163:40456.service: Deactivated successfully.
Sep 6 00:17:52.140211 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 00:17:52.141210 systemd-logind[1177]: Removed session 11.
Sep 6 00:17:57.143374 systemd[1]: Started sshd@11-159.223.206.243:22-147.75.109.163:40460.service.
Sep 6 00:17:57.198054 sshd[3325]: Accepted publickey for core from 147.75.109.163 port 40460 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:57.200798 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:57.207782 systemd-logind[1177]: New session 12 of user core.
Sep 6 00:17:57.208089 systemd[1]: Started session-12.scope.
Sep 6 00:17:57.360262 sshd[3325]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:57.368895 systemd[1]: Started sshd@12-159.223.206.243:22-147.75.109.163:40462.service.
Sep 6 00:17:57.370554 systemd[1]: sshd@11-159.223.206.243:22-147.75.109.163:40460.service: Deactivated successfully.
Sep 6 00:17:57.371600 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 00:17:57.373155 systemd-logind[1177]: Session 12 logged out. Waiting for processes to exit.
Sep 6 00:17:57.374624 systemd-logind[1177]: Removed session 12.
Sep 6 00:17:57.441910 sshd[3336]: Accepted publickey for core from 147.75.109.163 port 40462 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:57.444222 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:57.450226 systemd[1]: Started session-13.scope.
Sep 6 00:17:57.450826 systemd-logind[1177]: New session 13 of user core.
Sep 6 00:17:57.666926 sshd[3336]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:57.673580 systemd[1]: Started sshd@13-159.223.206.243:22-147.75.109.163:40470.service.
Sep 6 00:17:57.684084 systemd[1]: sshd@12-159.223.206.243:22-147.75.109.163:40462.service: Deactivated successfully.
Sep 6 00:17:57.685726 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:17:57.687948 systemd-logind[1177]: Session 13 logged out. Waiting for processes to exit.
Sep 6 00:17:57.691891 systemd-logind[1177]: Removed session 13.
Sep 6 00:17:57.746708 sshd[3346]: Accepted publickey for core from 147.75.109.163 port 40470 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:17:57.749076 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:17:57.755213 systemd-logind[1177]: New session 14 of user core.
Sep 6 00:17:57.755306 systemd[1]: Started session-14.scope.
Sep 6 00:17:57.927967 sshd[3346]: pam_unix(sshd:session): session closed for user core
Sep 6 00:17:57.931314 systemd-logind[1177]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:17:57.931704 systemd[1]: sshd@13-159.223.206.243:22-147.75.109.163:40470.service: Deactivated successfully.
Sep 6 00:17:57.932869 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:17:57.933881 systemd-logind[1177]: Removed session 14.
Sep 6 00:18:02.936806 systemd[1]: Started sshd@14-159.223.206.243:22-147.75.109.163:47086.service.
Sep 6 00:18:02.989922 sshd[3360]: Accepted publickey for core from 147.75.109.163 port 47086 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:02.992698 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:02.998537 systemd-logind[1177]: New session 15 of user core.
Sep 6 00:18:03.000248 systemd[1]: Started session-15.scope.
Sep 6 00:18:03.156758 sshd[3360]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:03.160437 systemd[1]: sshd@14-159.223.206.243:22-147.75.109.163:47086.service: Deactivated successfully.
Sep 6 00:18:03.161591 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:18:03.162871 systemd-logind[1177]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:18:03.164011 systemd-logind[1177]: Removed session 15.
Sep 6 00:18:05.513830 kubelet[1908]: E0906 00:18:05.513772 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:08.168300 systemd[1]: Started sshd@15-159.223.206.243:22-147.75.109.163:47092.service.
Sep 6 00:18:08.221321 sshd[3372]: Accepted publickey for core from 147.75.109.163 port 47092 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:08.223863 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:08.231074 systemd-logind[1177]: New session 16 of user core.
Sep 6 00:18:08.231748 systemd[1]: Started session-16.scope.
Sep 6 00:18:08.377037 sshd[3372]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:08.383227 systemd[1]: sshd@15-159.223.206.243:22-147.75.109.163:47092.service: Deactivated successfully.
Sep 6 00:18:08.384539 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:18:08.385898 systemd-logind[1177]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:18:08.389382 systemd[1]: Started sshd@16-159.223.206.243:22-147.75.109.163:47106.service.
Sep 6 00:18:08.392149 systemd-logind[1177]: Removed session 16.
Sep 6 00:18:08.443707 sshd[3384]: Accepted publickey for core from 147.75.109.163 port 47106 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:08.446426 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:08.452770 systemd-logind[1177]: New session 17 of user core.
Sep 6 00:18:08.454218 systemd[1]: Started session-17.scope.
Sep 6 00:18:08.515498 kubelet[1908]: E0906 00:18:08.514270 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:08.810107 sshd[3384]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:08.816613 systemd[1]: sshd@16-159.223.206.243:22-147.75.109.163:47106.service: Deactivated successfully.
Sep 6 00:18:08.818326 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:18:08.819736 systemd-logind[1177]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:18:08.823281 systemd[1]: Started sshd@17-159.223.206.243:22-147.75.109.163:47122.service.
Sep 6 00:18:08.825480 systemd-logind[1177]: Removed session 17.
Sep 6 00:18:08.882582 sshd[3394]: Accepted publickey for core from 147.75.109.163 port 47122 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:08.884437 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:08.890507 systemd-logind[1177]: New session 18 of user core.
Sep 6 00:18:08.890758 systemd[1]: Started session-18.scope.
Sep 6 00:18:10.603083 sshd[3394]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:10.615585 systemd[1]: Started sshd@18-159.223.206.243:22-147.75.109.163:36810.service.
Sep 6 00:18:10.617650 systemd[1]: sshd@17-159.223.206.243:22-147.75.109.163:47122.service: Deactivated successfully.
Sep 6 00:18:10.625313 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:18:10.631251 systemd-logind[1177]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:18:10.633362 systemd-logind[1177]: Removed session 18.
Sep 6 00:18:10.682938 sshd[3410]: Accepted publickey for core from 147.75.109.163 port 36810 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:10.685819 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:10.692920 systemd[1]: Started session-19.scope.
Sep 6 00:18:10.694715 systemd-logind[1177]: New session 19 of user core.
Sep 6 00:18:11.147262 sshd[3410]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:11.154189 systemd[1]: Started sshd@19-159.223.206.243:22-147.75.109.163:36826.service.
Sep 6 00:18:11.155925 systemd[1]: sshd@18-159.223.206.243:22-147.75.109.163:36810.service: Deactivated successfully.
Sep 6 00:18:11.157822 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:18:11.161409 systemd-logind[1177]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:18:11.163060 systemd-logind[1177]: Removed session 19.
Sep 6 00:18:11.221610 sshd[3421]: Accepted publickey for core from 147.75.109.163 port 36826 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:11.224849 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:11.232849 systemd-logind[1177]: New session 20 of user core.
Sep 6 00:18:11.233429 systemd[1]: Started session-20.scope.
Sep 6 00:18:11.411061 sshd[3421]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:11.415729 systemd-logind[1177]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:18:11.416120 systemd[1]: sshd@19-159.223.206.243:22-147.75.109.163:36826.service: Deactivated successfully.
Sep 6 00:18:11.416893 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:18:11.418920 systemd-logind[1177]: Removed session 20.
Sep 6 00:18:16.420233 systemd[1]: Started sshd@20-159.223.206.243:22-147.75.109.163:36830.service.
Sep 6 00:18:16.475096 sshd[3434]: Accepted publickey for core from 147.75.109.163 port 36830 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:16.477798 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:16.484539 systemd[1]: Started session-21.scope.
Sep 6 00:18:16.485209 systemd-logind[1177]: New session 21 of user core.
Sep 6 00:18:16.640726 sshd[3434]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:16.644641 systemd[1]: sshd@20-159.223.206.243:22-147.75.109.163:36830.service: Deactivated successfully.
Sep 6 00:18:16.645619 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:18:16.646490 systemd-logind[1177]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:18:16.647410 systemd-logind[1177]: Removed session 21.
Sep 6 00:18:19.514073 kubelet[1908]: E0906 00:18:19.514027 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:21.515805 kubelet[1908]: E0906 00:18:21.515746 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:21.651378 systemd[1]: Started sshd@21-159.223.206.243:22-147.75.109.163:41180.service.
Sep 6 00:18:21.705450 sshd[3449]: Accepted publickey for core from 147.75.109.163 port 41180 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:21.707277 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:21.714224 systemd[1]: Started session-22.scope.
Sep 6 00:18:21.715059 systemd-logind[1177]: New session 22 of user core.
Sep 6 00:18:21.875810 sshd[3449]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:21.879875 systemd[1]: sshd@21-159.223.206.243:22-147.75.109.163:41180.service: Deactivated successfully.
Sep 6 00:18:21.880744 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 00:18:21.882122 systemd-logind[1177]: Session 22 logged out. Waiting for processes to exit.
Sep 6 00:18:21.883218 systemd-logind[1177]: Removed session 22.
Sep 6 00:18:23.515107 kubelet[1908]: E0906 00:18:23.515038 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:26.885145 systemd[1]: Started sshd@22-159.223.206.243:22-147.75.109.163:41182.service.
Sep 6 00:18:26.940059 sshd[3461]: Accepted publickey for core from 147.75.109.163 port 41182 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:26.942511 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:26.950785 systemd[1]: Started session-23.scope.
Sep 6 00:18:26.951144 systemd-logind[1177]: New session 23 of user core.
Sep 6 00:18:27.110214 sshd[3461]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:27.114122 systemd[1]: sshd@22-159.223.206.243:22-147.75.109.163:41182.service: Deactivated successfully.
Sep 6 00:18:27.115279 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 00:18:27.116869 systemd-logind[1177]: Session 23 logged out. Waiting for processes to exit.
Sep 6 00:18:27.118641 systemd-logind[1177]: Removed session 23.
Sep 6 00:18:29.514824 kubelet[1908]: E0906 00:18:29.514784 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:32.121536 systemd[1]: Started sshd@23-159.223.206.243:22-147.75.109.163:40530.service.
Sep 6 00:18:32.180013 sshd[3475]: Accepted publickey for core from 147.75.109.163 port 40530 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:32.182033 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:32.189195 systemd-logind[1177]: New session 24 of user core.
Sep 6 00:18:32.189683 systemd[1]: Started session-24.scope.
Sep 6 00:18:32.344717 sshd[3475]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:32.349130 systemd[1]: sshd@23-159.223.206.243:22-147.75.109.163:40530.service: Deactivated successfully.
Sep 6 00:18:32.350081 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:18:32.351219 systemd-logind[1177]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:18:32.353155 systemd-logind[1177]: Removed session 24.
Sep 6 00:18:37.352587 systemd[1]: Started sshd@24-159.223.206.243:22-147.75.109.163:40542.service.
Sep 6 00:18:37.409219 sshd[3488]: Accepted publickey for core from 147.75.109.163 port 40542 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:37.412439 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:37.420127 systemd-logind[1177]: New session 25 of user core.
Sep 6 00:18:37.420158 systemd[1]: Started session-25.scope.
Sep 6 00:18:37.578217 sshd[3488]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:37.582864 systemd[1]: sshd@24-159.223.206.243:22-147.75.109.163:40542.service: Deactivated successfully.
Sep 6 00:18:37.583934 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:18:37.585488 systemd-logind[1177]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:18:37.586413 systemd-logind[1177]: Removed session 25.
Sep 6 00:18:41.514197 kubelet[1908]: E0906 00:18:41.514150 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:42.587700 systemd[1]: Started sshd@25-159.223.206.243:22-147.75.109.163:34806.service.
Sep 6 00:18:42.642685 sshd[3500]: Accepted publickey for core from 147.75.109.163 port 34806 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:42.645524 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:42.652051 systemd[1]: Started session-26.scope.
Sep 6 00:18:42.652619 systemd-logind[1177]: New session 26 of user core.
Sep 6 00:18:42.798838 sshd[3500]: pam_unix(sshd:session): session closed for user core
Sep 6 00:18:42.807403 systemd[1]: sshd@25-159.223.206.243:22-147.75.109.163:34806.service: Deactivated successfully.
Sep 6 00:18:42.809274 systemd[1]: session-26.scope: Deactivated successfully.
Sep 6 00:18:42.810425 systemd-logind[1177]: Session 26 logged out. Waiting for processes to exit.
Sep 6 00:18:42.813744 systemd[1]: Started sshd@26-159.223.206.243:22-147.75.109.163:34814.service.
Sep 6 00:18:42.816159 systemd-logind[1177]: Removed session 26.
Sep 6 00:18:42.870491 sshd[3512]: Accepted publickey for core from 147.75.109.163 port 34814 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc
Sep 6 00:18:42.873082 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:18:42.879741 systemd[1]: Started session-27.scope.
Sep 6 00:18:42.880760 systemd-logind[1177]: New session 27 of user core.
Sep 6 00:18:44.391559 kubelet[1908]: I0906 00:18:44.391488 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zb662" podStartSLOduration=107.391464356 podStartE2EDuration="1m47.391464356s" podCreationTimestamp="2025-09-06 00:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:17:21.731729238 +0000 UTC m=+28.436214293" watchObservedRunningTime="2025-09-06 00:18:44.391464356 +0000 UTC m=+111.095949414"
Sep 6 00:18:44.461764 env[1184]: time="2025-09-06T00:18:44.461691611Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:18:44.463846 env[1184]: time="2025-09-06T00:18:44.463782673Z" level=info msg="StopContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" with timeout 30 (s)"
Sep 6 00:18:44.464308 env[1184]: time="2025-09-06T00:18:44.464207956Z" level=info msg="Stop container \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" with signal terminated"
Sep 6 00:18:44.470552 env[1184]: time="2025-09-06T00:18:44.470483857Z" level=info msg="StopContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" with timeout 2 (s)"
Sep 6 00:18:44.470820 env[1184]: time="2025-09-06T00:18:44.470794369Z" level=info msg="Stop container \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" with signal terminated"
Sep 6 00:18:44.476859 systemd[1]: cri-containerd-58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044.scope: Deactivated successfully.
Sep 6 00:18:44.486030 systemd-networkd[1003]: lxc_health: Link DOWN
Sep 6 00:18:44.486038 systemd-networkd[1003]: lxc_health: Lost carrier
Sep 6 00:18:44.525239 systemd[1]: cri-containerd-dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a.scope: Deactivated successfully.
Sep 6 00:18:44.525523 systemd[1]: cri-containerd-dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a.scope: Consumed 8.322s CPU time.
Sep 6 00:18:44.536570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044-rootfs.mount: Deactivated successfully.
Sep 6 00:18:44.544573 env[1184]: time="2025-09-06T00:18:44.544519374Z" level=info msg="shim disconnected" id=58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044
Sep 6 00:18:44.544836 env[1184]: time="2025-09-06T00:18:44.544586562Z" level=warning msg="cleaning up after shim disconnected" id=58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044 namespace=k8s.io
Sep 6 00:18:44.544836 env[1184]: time="2025-09-06T00:18:44.544596758Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:44.563656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a-rootfs.mount: Deactivated successfully.
Sep 6 00:18:44.567122 env[1184]: time="2025-09-06T00:18:44.567055349Z" level=info msg="shim disconnected" id=dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a
Sep 6 00:18:44.567122 env[1184]: time="2025-09-06T00:18:44.567120463Z" level=warning msg="cleaning up after shim disconnected" id=dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a namespace=k8s.io
Sep 6 00:18:44.567443 env[1184]: time="2025-09-06T00:18:44.567135953Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:44.571387 env[1184]: time="2025-09-06T00:18:44.571324945Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3574 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:44.573805 env[1184]: time="2025-09-06T00:18:44.573753639Z" level=info msg="StopContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" returns successfully"
Sep 6 00:18:44.574744 env[1184]: time="2025-09-06T00:18:44.574691560Z" level=info msg="StopPodSandbox for \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\""
Sep 6 00:18:44.574899 env[1184]: time="2025-09-06T00:18:44.574769300Z" level=info msg="Container to stop \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.579089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6-shm.mount: Deactivated successfully.
Sep 6 00:18:44.584776 env[1184]: time="2025-09-06T00:18:44.584727237Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3592 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:44.587229 env[1184]: time="2025-09-06T00:18:44.587170687Z" level=info msg="StopContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" returns successfully"
Sep 6 00:18:44.588221 env[1184]: time="2025-09-06T00:18:44.588173174Z" level=info msg="StopPodSandbox for \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\""
Sep 6 00:18:44.588378 env[1184]: time="2025-09-06T00:18:44.588269718Z" level=info msg="Container to stop \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.588378 env[1184]: time="2025-09-06T00:18:44.588296072Z" level=info msg="Container to stop \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.588378 env[1184]: time="2025-09-06T00:18:44.588315190Z" level=info msg="Container to stop \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.588378 env[1184]: time="2025-09-06T00:18:44.588331966Z" level=info msg="Container to stop \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.588378 env[1184]: time="2025-09-06T00:18:44.588348044Z" level=info msg="Container to stop \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:18:44.592963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074-shm.mount: Deactivated successfully.
Sep 6 00:18:44.597888 systemd[1]: cri-containerd-5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6.scope: Deactivated successfully.
Sep 6 00:18:44.611477 systemd[1]: cri-containerd-2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074.scope: Deactivated successfully.
Sep 6 00:18:44.648893 env[1184]: time="2025-09-06T00:18:44.647643723Z" level=info msg="shim disconnected" id=5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6
Sep 6 00:18:44.648893 env[1184]: time="2025-09-06T00:18:44.647710031Z" level=warning msg="cleaning up after shim disconnected" id=5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6 namespace=k8s.io
Sep 6 00:18:44.648893 env[1184]: time="2025-09-06T00:18:44.647722620Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:44.655153 env[1184]: time="2025-09-06T00:18:44.655092352Z" level=info msg="shim disconnected" id=2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074
Sep 6 00:18:44.655451 env[1184]: time="2025-09-06T00:18:44.655426128Z" level=warning msg="cleaning up after shim disconnected" id=2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074 namespace=k8s.io
Sep 6 00:18:44.655556 env[1184]: time="2025-09-06T00:18:44.655539096Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:44.665624 env[1184]: time="2025-09-06T00:18:44.665567379Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3642 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:44.666025 env[1184]: time="2025-09-06T00:18:44.665911511Z" level=info msg="TearDown network for sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" successfully"
Sep 6 00:18:44.666025 env[1184]: time="2025-09-06T00:18:44.665962358Z" level=info msg="StopPodSandbox for \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" returns successfully"
Sep 6 00:18:44.670157 env[1184]: time="2025-09-06T00:18:44.670098543Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3647 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:44.670455 env[1184]: time="2025-09-06T00:18:44.670423294Z" level=info msg="TearDown network for sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" successfully"
Sep 6 00:18:44.670455 env[1184]: time="2025-09-06T00:18:44.670457457Z" level=info msg="StopPodSandbox for \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" returns successfully"
Sep 6 00:18:44.718231 kubelet[1908]: I0906 00:18:44.718135 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cni-path\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718231 kubelet[1908]: I0906 00:18:44.718187 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-net\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718231 kubelet[1908]: I0906 00:18:44.718214 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkl5\" (UniqueName: \"kubernetes.io/projected/49084775-6173-4013-936a-32c631ffc705-kube-api-access-mzkl5\") pod \"49084775-6173-4013-936a-32c631ffc705\" (UID: \"49084775-6173-4013-936a-32c631ffc705\") "
Sep 6 00:18:44.718231 kubelet[1908]: I0906 00:18:44.718237 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-kernel\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718253 kubelet[1908]: I0906 00:18:44.718253 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-bpf-maps\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718267 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-lib-modules\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718283 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hubble-tls\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718300 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-config-path\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718314 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hostproc\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718328 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-cgroup\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718575 kubelet[1908]: I0906 00:18:44.718348 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m7mr\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-kube-api-access-5m7mr\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718815 kubelet[1908]: I0906 00:18:44.718363 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-run\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718815 kubelet[1908]: I0906 00:18:44.718377 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-etc-cni-netd\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718815 kubelet[1908]: I0906 00:18:44.718396 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-clustermesh-secrets\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718815 kubelet[1908]: I0906 00:18:44.718413 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-xtables-lock\") pod \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\" (UID: \"94312cb6-d25e-4877-8fe1-b9c714d1f2c0\") "
Sep 6 00:18:44.718815 kubelet[1908]: I0906 00:18:44.718435 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49084775-6173-4013-936a-32c631ffc705-cilium-config-path\") pod \"49084775-6173-4013-936a-32c631ffc705\" (UID: \"49084775-6173-4013-936a-32c631ffc705\") "
Sep 6 00:18:44.727169 kubelet[1908]: I0906 00:18:44.724965 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.731726 kubelet[1908]: I0906 00:18:44.727127 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738025 kubelet[1908]: I0906 00:18:44.727154 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738025 kubelet[1908]: I0906 00:18:44.727410 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738025 kubelet[1908]: I0906 00:18:44.730379 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738025 kubelet[1908]: I0906 00:18:44.730429 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738025 kubelet[1908]: I0906 00:18:44.730520 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738424 kubelet[1908]: I0906 00:18:44.730571 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738424 kubelet[1908]: I0906 00:18:44.730597 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738424 kubelet[1908]: I0906 00:18:44.725474 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49084775-6173-4013-936a-32c631ffc705-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49084775-6173-4013-936a-32c631ffc705" (UID: "49084775-6173-4013-936a-32c631ffc705"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:18:44.738424 kubelet[1908]: I0906 00:18:44.734238 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:18:44.738424 kubelet[1908]: I0906 00:18:44.735707 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:18:44.738600 kubelet[1908]: I0906 00:18:44.735854 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49084775-6173-4013-936a-32c631ffc705-kube-api-access-mzkl5" (OuterVolumeSpecName: "kube-api-access-mzkl5") pod "49084775-6173-4013-936a-32c631ffc705" (UID: "49084775-6173-4013-936a-32c631ffc705"). InnerVolumeSpecName "kube-api-access-mzkl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:18:44.738600 kubelet[1908]: I0906 00:18:44.737725 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:18:44.738600 kubelet[1908]: I0906 00:18:44.737915 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-kube-api-access-5m7mr" (OuterVolumeSpecName: "kube-api-access-5m7mr") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "kube-api-access-5m7mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:18:44.743042 kubelet[1908]: I0906 00:18:44.742357 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94312cb6-d25e-4877-8fe1-b9c714d1f2c0" (UID: "94312cb6-d25e-4877-8fe1-b9c714d1f2c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:18:44.819554 kubelet[1908]: I0906 00:18:44.819501 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49084775-6173-4013-936a-32c631ffc705-cilium-config-path\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.819873 kubelet[1908]: I0906 00:18:44.819853 1908 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cni-path\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820000 kubelet[1908]: I0906 00:18:44.819954 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-net\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820087 kubelet[1908]: I0906 00:18:44.820073 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzkl5\" (UniqueName: \"kubernetes.io/projected/49084775-6173-4013-936a-32c631ffc705-kube-api-access-mzkl5\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820200 kubelet[1908]: I0906 00:18:44.820187 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820290 kubelet[1908]: I0906 00:18:44.820278 1908 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-bpf-maps\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820394 kubelet[1908]: I0906 00:18:44.820382 1908 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-lib-modules\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820477 kubelet[1908]: I0906 00:18:44.820466 1908 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hubble-tls\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820551 kubelet[1908]: I0906 00:18:44.820540 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-config-path\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820623 kubelet[1908]: I0906 00:18:44.820612 1908 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-hostproc\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820708 kubelet[1908]: I0906 00:18:44.820696 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-cgroup\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820795 kubelet[1908]: I0906 00:18:44.820784 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m7mr\" (UniqueName: \"kubernetes.io/projected/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-kube-api-access-5m7mr\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820866 kubelet[1908]: I0906 00:18:44.820855 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-cilium-run\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.820931 kubelet[1908]: I0906 00:18:44.820921 1908 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-etc-cni-netd\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.821021 kubelet[1908]: I0906 00:18:44.821010 1908 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-xtables-lock\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.821109 kubelet[1908]: I0906 00:18:44.821098 1908 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94312cb6-d25e-4877-8fe1-b9c714d1f2c0-clustermesh-secrets\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\""
Sep 6 00:18:44.868433 kubelet[1908]: I0906 00:18:44.868369 1908 scope.go:117] "RemoveContainer" containerID="dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a"
Sep 6 00:18:44.872074 systemd[1]: Removed slice kubepods-burstable-pod94312cb6_d25e_4877_8fe1_b9c714d1f2c0.slice.
Sep 6 00:18:44.872177 systemd[1]: kubepods-burstable-pod94312cb6_d25e_4877_8fe1_b9c714d1f2c0.slice: Consumed 8.472s CPU time.
Sep 6 00:18:44.876586 env[1184]: time="2025-09-06T00:18:44.876519627Z" level=info msg="RemoveContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\""
Sep 6 00:18:44.883889 env[1184]: time="2025-09-06T00:18:44.883557277Z" level=info msg="RemoveContainer for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" returns successfully"
Sep 6 00:18:44.884343 kubelet[1908]: I0906 00:18:44.884298 1908 scope.go:117] "RemoveContainer" containerID="186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1"
Sep 6 00:18:44.887607 systemd[1]: Removed slice kubepods-besteffort-pod49084775_6173_4013_936a_32c631ffc705.slice.
Sep 6 00:18:44.895382 env[1184]: time="2025-09-06T00:18:44.894112187Z" level=info msg="RemoveContainer for \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\""
Sep 6 00:18:44.898304 env[1184]: time="2025-09-06T00:18:44.898157803Z" level=info msg="RemoveContainer for \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\" returns successfully"
Sep 6 00:18:44.899243 kubelet[1908]: I0906 00:18:44.898960 1908 scope.go:117] "RemoveContainer" containerID="035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b"
Sep 6 00:18:44.910262 env[1184]: time="2025-09-06T00:18:44.909726124Z" level=info msg="RemoveContainer for \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\""
Sep 6 00:18:44.912406 env[1184]: time="2025-09-06T00:18:44.912357510Z" level=info msg="RemoveContainer for \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\" returns successfully"
Sep 6 00:18:44.912719 kubelet[1908]: I0906 00:18:44.912671 1908 scope.go:117] "RemoveContainer" containerID="acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944"
Sep 6 00:18:44.914863 env[1184]: time="2025-09-06T00:18:44.914499785Z" level=info msg="RemoveContainer for \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\""
Sep 6 00:18:44.917473 env[1184]: time="2025-09-06T00:18:44.917410266Z" level=info msg="RemoveContainer for \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\" returns successfully"
Sep 6 00:18:44.918130 kubelet[1908]: I0906 00:18:44.918098 1908 scope.go:117] "RemoveContainer" containerID="2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a"
Sep 6 00:18:44.919794 env[1184]: time="2025-09-06T00:18:44.919740602Z" level=info msg="RemoveContainer for \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\""
Sep 6 00:18:44.923378 env[1184]: time="2025-09-06T00:18:44.923323914Z" level=info msg="RemoveContainer for \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\" returns successfully"
Sep 6 00:18:44.926729 kubelet[1908]: I0906 00:18:44.926664 1908 scope.go:117] "RemoveContainer" containerID="dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a"
Sep 6 00:18:44.928499 env[1184]: time="2025-09-06T00:18:44.928409652Z" level=error msg="ContainerStatus for \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\": not found"
Sep 6 00:18:44.930925 kubelet[1908]: E0906 00:18:44.930883 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\": not found" containerID="dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a"
Sep 6 00:18:44.931263 kubelet[1908]: I0906 00:18:44.931160 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a"} err="failed to get container status \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dde5e25201dbf30f6b4c06776c7a3a94091dd6cb676540dc90aad1899443a81a\": not found"
Sep 6 00:18:44.931422 kubelet[1908]: I0906 00:18:44.931403 1908 scope.go:117] "RemoveContainer" containerID="186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1"
Sep 6 00:18:44.932159 env[1184]: time="2025-09-06T00:18:44.932083155Z" level=error msg="ContainerStatus for \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\": not found"
Sep 6 00:18:44.932487 kubelet[1908]: E0906 00:18:44.932461 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\": not found" containerID="186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1"
Sep 6 00:18:44.932710 kubelet[1908]: I0906 00:18:44.932491 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1"} err="failed to get container status \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"186d66ce8c23cabaff2686ae8e8b2820554dcf2baf825287358700f5c0feefc1\": not found"
Sep 6 00:18:44.932710 kubelet[1908]: I0906 00:18:44.932526 1908 scope.go:117] "RemoveContainer" containerID="035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b"
Sep 6 00:18:44.932995 env[1184]: time="2025-09-06T00:18:44.932921770Z" level=error msg="ContainerStatus for \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\": not found"
Sep 6 00:18:44.933233 kubelet[1908]: E0906 00:18:44.933210 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\": not found" containerID="035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b"
Sep 6 00:18:44.934655 kubelet[1908]: I0906 00:18:44.934623 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b"} err="failed to get container status \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"035662ce6e584f86adff21b01e6cfb1d67da2956bde081a84bc83b7b540bad5b\": not found"
Sep 6 00:18:44.934775 kubelet[1908]: I0906 00:18:44.934759 1908 scope.go:117] "RemoveContainer" containerID="acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944"
Sep 6 00:18:44.935682 env[1184]: time="2025-09-06T00:18:44.935605850Z" level=error msg="ContainerStatus for \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\": not found"
Sep 6 00:18:44.936121 kubelet[1908]: E0906 00:18:44.936097 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\": not found" containerID="acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944"
Sep 6 00:18:44.936197 kubelet[1908]: I0906 00:18:44.936123 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944"} err="failed to get container status \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\": rpc error: code = NotFound desc = an error occurred when try to find container \"acd75eb7eca0411031c96202fd738614d8a69295bfcbfe4ca049a515bdeaa944\": not found"
Sep 6 00:18:44.936197 kubelet[1908]: I0906 00:18:44.936142 1908 scope.go:117] "RemoveContainer" containerID="2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a"
Sep 6 00:18:44.936854 env[1184]: time="2025-09-06T00:18:44.936795020Z" level=error msg="ContainerStatus for \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\": not found"
Sep 6 00:18:44.939690 kubelet[1908]: E0906 00:18:44.939647 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\": not found" containerID="2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a"
Sep 6 00:18:44.939808 kubelet[1908]: I0906 00:18:44.939692 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a"} err="failed to get container status \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2427ffa05d8074b8d7a7bb57058289c24f66a660bec05a50f14d44d5516fee3a\": not found"
Sep 6 00:18:44.939808 kubelet[1908]: I0906 00:18:44.939728 1908 scope.go:117] "RemoveContainer" containerID="58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044"
Sep 6 00:18:44.945580 env[1184]:
time="2025-09-06T00:18:44.945520019Z" level=info msg="RemoveContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\"" Sep 6 00:18:44.949202 env[1184]: time="2025-09-06T00:18:44.949154723Z" level=info msg="RemoveContainer for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" returns successfully" Sep 6 00:18:44.949612 kubelet[1908]: I0906 00:18:44.949580 1908 scope.go:117] "RemoveContainer" containerID="58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044" Sep 6 00:18:44.949952 env[1184]: time="2025-09-06T00:18:44.949877599Z" level=error msg="ContainerStatus for \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\": not found" Sep 6 00:18:44.950167 kubelet[1908]: E0906 00:18:44.950129 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\": not found" containerID="58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044" Sep 6 00:18:44.950226 kubelet[1908]: I0906 00:18:44.950175 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044"} err="failed to get container status \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\": rpc error: code = NotFound desc = an error occurred when try to find container \"58db75a5fc7e012e38adbcc536557ae12820d6d7d8fea1236bda49abd8aaf044\": not found" Sep 6 00:18:45.426701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6-rootfs.mount: Deactivated successfully. 
Sep 6 00:18:45.426817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074-rootfs.mount: Deactivated successfully. Sep 6 00:18:45.426875 systemd[1]: var-lib-kubelet-pods-94312cb6\x2dd25e\x2d4877\x2d8fe1\x2db9c714d1f2c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5m7mr.mount: Deactivated successfully. Sep 6 00:18:45.426933 systemd[1]: var-lib-kubelet-pods-49084775\x2d6173\x2d4013\x2d936a\x2d32c631ffc705-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzkl5.mount: Deactivated successfully. Sep 6 00:18:45.427031 systemd[1]: var-lib-kubelet-pods-94312cb6\x2dd25e\x2d4877\x2d8fe1\x2db9c714d1f2c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:18:45.427095 systemd[1]: var-lib-kubelet-pods-94312cb6\x2dd25e\x2d4877\x2d8fe1\x2db9c714d1f2c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:18:45.515555 kubelet[1908]: I0906 00:18:45.515492 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49084775-6173-4013-936a-32c631ffc705" path="/var/lib/kubelet/pods/49084775-6173-4013-936a-32c631ffc705/volumes" Sep 6 00:18:45.516078 kubelet[1908]: I0906 00:18:45.516032 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" path="/var/lib/kubelet/pods/94312cb6-d25e-4877-8fe1-b9c714d1f2c0/volumes" Sep 6 00:18:46.302583 sshd[3512]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:46.311178 systemd[1]: Started sshd@27-159.223.206.243:22-147.75.109.163:34828.service. Sep 6 00:18:46.313194 systemd[1]: sshd@26-159.223.206.243:22-147.75.109.163:34814.service: Deactivated successfully. Sep 6 00:18:46.316237 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:18:46.319197 systemd-logind[1177]: Session 27 logged out. Waiting for processes to exit. 
Sep 6 00:18:46.324055 systemd-logind[1177]: Removed session 27. Sep 6 00:18:46.380244 sshd[3676]: Accepted publickey for core from 147.75.109.163 port 34828 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:46.382907 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:46.389916 systemd-logind[1177]: New session 28 of user core. Sep 6 00:18:46.390188 systemd[1]: Started session-28.scope. Sep 6 00:18:46.513993 kubelet[1908]: E0906 00:18:46.513937 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:47.039919 sshd[3676]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:47.045895 systemd[1]: Started sshd@28-159.223.206.243:22-147.75.109.163:34832.service. Sep 6 00:18:47.055318 systemd[1]: sshd@27-159.223.206.243:22-147.75.109.163:34828.service: Deactivated successfully. Sep 6 00:18:47.056300 systemd[1]: session-28.scope: Deactivated successfully. Sep 6 00:18:47.058010 systemd-logind[1177]: Session 28 logged out. Waiting for processes to exit. Sep 6 00:18:47.060206 systemd-logind[1177]: Removed session 28. 
Sep 6 00:18:47.097450 kubelet[1908]: E0906 00:18:47.097403 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="clean-cilium-state" Sep 6 00:18:47.097450 kubelet[1908]: E0906 00:18:47.097462 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="cilium-agent" Sep 6 00:18:47.097900 kubelet[1908]: E0906 00:18:47.097472 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="mount-cgroup" Sep 6 00:18:47.097900 kubelet[1908]: E0906 00:18:47.097481 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="apply-sysctl-overwrites" Sep 6 00:18:47.097900 kubelet[1908]: E0906 00:18:47.097492 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="mount-bpf-fs" Sep 6 00:18:47.097900 kubelet[1908]: E0906 00:18:47.097499 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49084775-6173-4013-936a-32c631ffc705" containerName="cilium-operator" Sep 6 00:18:47.097900 kubelet[1908]: I0906 00:18:47.097553 1908 memory_manager.go:354] "RemoveStaleState removing state" podUID="94312cb6-d25e-4877-8fe1-b9c714d1f2c0" containerName="cilium-agent" Sep 6 00:18:47.097900 kubelet[1908]: I0906 00:18:47.097560 1908 memory_manager.go:354] "RemoveStaleState removing state" podUID="49084775-6173-4013-936a-32c631ffc705" containerName="cilium-operator" Sep 6 00:18:47.101057 sshd[3687]: Accepted publickey for core from 147.75.109.163 port 34832 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:47.103374 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:47.116469 systemd[1]: Started session-29.scope. Sep 6 00:18:47.117838 systemd-logind[1177]: New session 29 of user core. 
Sep 6 00:18:47.151006 kubelet[1908]: I0906 00:18:47.145882 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-xtables-lock\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152177 kubelet[1908]: I0906 00:18:47.152140 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-clustermesh-secrets\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152381 kubelet[1908]: I0906 00:18:47.152359 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cni-path\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152461 kubelet[1908]: I0906 00:18:47.152447 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-ipsec-secrets\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152553 kubelet[1908]: I0906 00:18:47.152535 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-run\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152637 kubelet[1908]: I0906 00:18:47.152624 1908 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hostproc\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152709 kubelet[1908]: I0906 00:18:47.152696 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-kernel\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152788 kubelet[1908]: I0906 00:18:47.152775 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-net\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.152860 kubelet[1908]: I0906 00:18:47.152845 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hubble-tls\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153003 kubelet[1908]: I0906 00:18:47.152985 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-cgroup\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153152 kubelet[1908]: I0906 00:18:47.153137 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-lib-modules\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153234 kubelet[1908]: I0906 00:18:47.153220 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-config-path\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153325 kubelet[1908]: I0906 00:18:47.153312 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsk6c\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-kube-api-access-rsk6c\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153403 kubelet[1908]: I0906 00:18:47.153390 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-bpf-maps\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.153485 kubelet[1908]: I0906 00:18:47.153463 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-etc-cni-netd\") pod \"cilium-56p4h\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " pod="kube-system/cilium-56p4h" Sep 6 00:18:47.158141 systemd[1]: Created slice kubepods-burstable-pod88cd0e59_2657_40ff_84a7_3a88c57ea8ea.slice. Sep 6 00:18:47.431657 sshd[3687]: pam_unix(sshd:session): session closed for user core Sep 6 00:18:47.437831 systemd[1]: sshd@28-159.223.206.243:22-147.75.109.163:34832.service: Deactivated successfully. 
Sep 6 00:18:47.438910 systemd[1]: session-29.scope: Deactivated successfully. Sep 6 00:18:47.440026 systemd-logind[1177]: Session 29 logged out. Waiting for processes to exit. Sep 6 00:18:47.442031 systemd[1]: Started sshd@29-159.223.206.243:22-147.75.109.163:34838.service. Sep 6 00:18:47.448253 systemd-logind[1177]: Removed session 29. Sep 6 00:18:47.464066 kubelet[1908]: E0906 00:18:47.463749 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:47.465442 env[1184]: time="2025-09-06T00:18:47.465392594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-56p4h,Uid:88cd0e59-2657-40ff-84a7-3a88c57ea8ea,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:47.491312 env[1184]: time="2025-09-06T00:18:47.491201593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:47.491624 env[1184]: time="2025-09-06T00:18:47.491560550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:47.491773 env[1184]: time="2025-09-06T00:18:47.491741417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:47.493305 env[1184]: time="2025-09-06T00:18:47.493065907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84 pid=3715 runtime=io.containerd.runc.v2 Sep 6 00:18:47.513311 systemd[1]: Started cri-containerd-8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84.scope. 
Sep 6 00:18:47.521014 sshd[3705]: Accepted publickey for core from 147.75.109.163 port 34838 ssh2: RSA SHA256:zgVES46caP1+99uzHYMS+9ry3WhXasb4NYAgm1B5TPc Sep 6 00:18:47.524007 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:18:47.536035 systemd-logind[1177]: New session 30 of user core. Sep 6 00:18:47.536276 systemd[1]: Started session-30.scope. Sep 6 00:18:47.583920 env[1184]: time="2025-09-06T00:18:47.583871043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-56p4h,Uid:88cd0e59-2657-40ff-84a7-3a88c57ea8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\"" Sep 6 00:18:47.592152 kubelet[1908]: E0906 00:18:47.586698 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:47.597426 env[1184]: time="2025-09-06T00:18:47.597376600Z" level=info msg="CreateContainer within sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:18:47.619154 env[1184]: time="2025-09-06T00:18:47.618961924Z" level=info msg="CreateContainer within sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\"" Sep 6 00:18:47.620217 env[1184]: time="2025-09-06T00:18:47.620179036Z" level=info msg="StartContainer for \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\"" Sep 6 00:18:47.651524 systemd[1]: Started cri-containerd-83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02.scope. Sep 6 00:18:47.674288 systemd[1]: cri-containerd-83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02.scope: Deactivated successfully. 
Sep 6 00:18:47.674503 systemd[1]: Stopped cri-containerd-83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02.scope. Sep 6 00:18:47.688594 env[1184]: time="2025-09-06T00:18:47.687405088Z" level=info msg="shim disconnected" id=83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02 Sep 6 00:18:47.688594 env[1184]: time="2025-09-06T00:18:47.687472890Z" level=warning msg="cleaning up after shim disconnected" id=83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02 namespace=k8s.io Sep 6 00:18:47.688594 env[1184]: time="2025-09-06T00:18:47.687487850Z" level=info msg="cleaning up dead shim" Sep 6 00:18:47.707162 env[1184]: time="2025-09-06T00:18:47.707073372Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:18:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:18:47.713350 env[1184]: time="2025-09-06T00:18:47.713178799Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Sep 6 00:18:47.716497 env[1184]: time="2025-09-06T00:18:47.714069451Z" level=error msg="Failed to pipe stdout of container \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\"" error="reading from a closed fifo" Sep 6 00:18:47.716738 env[1184]: time="2025-09-06T00:18:47.716152191Z" level=error msg="Failed to pipe stderr of container \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\"" error="reading from a closed fifo" Sep 6 00:18:47.718668 env[1184]: time="2025-09-06T00:18:47.718586035Z" level=error msg="StartContainer for \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\" failed" error="failed to create containerd task: failed to create shim 
task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:18:47.719481 kubelet[1908]: E0906 00:18:47.719431 1908 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02" Sep 6 00:18:47.724895 kubelet[1908]: E0906 00:18:47.724818 1908 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:18:47.724895 kubelet[1908]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:18:47.724895 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:18:47.724895 kubelet[1908]: rm /hostbin/cilium-mount Sep 6 00:18:47.725174 kubelet[1908]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsk6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-56p4h_kube-system(88cd0e59-2657-40ff-84a7-3a88c57ea8ea): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:18:47.725174 kubelet[1908]: > logger="UnhandledError" Sep 6 00:18:47.726100 kubelet[1908]: E0906 00:18:47.726049 1908 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-56p4h" podUID="88cd0e59-2657-40ff-84a7-3a88c57ea8ea" Sep 6 00:18:47.887188 env[1184]: time="2025-09-06T00:18:47.887142908Z" level=info msg="StopPodSandbox for \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\"" Sep 6 00:18:47.887448 env[1184]: time="2025-09-06T00:18:47.887409319Z" level=info msg="Container to stop \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:18:47.901850 systemd[1]: cri-containerd-8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84.scope: Deactivated successfully. Sep 6 00:18:47.935429 env[1184]: time="2025-09-06T00:18:47.935362407Z" level=info msg="shim disconnected" id=8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84 Sep 6 00:18:47.935429 env[1184]: time="2025-09-06T00:18:47.935423706Z" level=warning msg="cleaning up after shim disconnected" id=8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84 namespace=k8s.io Sep 6 00:18:47.935429 env[1184]: time="2025-09-06T00:18:47.935434715Z" level=info msg="cleaning up dead shim" Sep 6 00:18:47.948564 env[1184]: time="2025-09-06T00:18:47.947260547Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3814 runtime=io.containerd.runc.v2\n" Sep 6 00:18:47.949164 env[1184]: time="2025-09-06T00:18:47.949119273Z" level=info msg="TearDown network for sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" successfully" Sep 6 00:18:47.949382 env[1184]: time="2025-09-06T00:18:47.949348723Z" level=info 
msg="StopPodSandbox for \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" returns successfully" Sep 6 00:18:48.067314 kubelet[1908]: I0906 00:18:48.067245 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-clustermesh-secrets\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.067771 kubelet[1908]: I0906 00:18:48.067736 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-ipsec-secrets\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.067903 kubelet[1908]: I0906 00:18:48.067887 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-run\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068032 kubelet[1908]: I0906 00:18:48.068018 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-kernel\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068174 kubelet[1908]: I0906 00:18:48.068158 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hubble-tls\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068260 kubelet[1908]: I0906 00:18:48.068244 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hostproc\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068335 kubelet[1908]: I0906 00:18:48.068322 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-lib-modules\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068420 kubelet[1908]: I0906 00:18:48.068407 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-config-path\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068517 kubelet[1908]: I0906 00:18:48.068501 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-bpf-maps\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068654 kubelet[1908]: I0906 00:18:48.068638 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-xtables-lock\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068744 kubelet[1908]: I0906 00:18:48.068730 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-etc-cni-netd\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068821 kubelet[1908]: I0906 
00:18:48.068808 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cni-path\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068897 kubelet[1908]: I0906 00:18:48.068885 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-net\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.068968 kubelet[1908]: I0906 00:18:48.068956 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-cgroup\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.069076 kubelet[1908]: I0906 00:18:48.069058 1908 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsk6c\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-kube-api-access-rsk6c\") pod \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\" (UID: \"88cd0e59-2657-40ff-84a7-3a88c57ea8ea\") " Sep 6 00:18:48.069807 kubelet[1908]: I0906 00:18:48.069776 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.070036 kubelet[1908]: I0906 00:18:48.069933 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.073182 kubelet[1908]: I0906 00:18:48.073130 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:18:48.073346 kubelet[1908]: I0906 00:18:48.073202 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.073346 kubelet[1908]: I0906 00:18:48.073223 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.073346 kubelet[1908]: I0906 00:18:48.073237 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.076354 kubelet[1908]: I0906 00:18:48.076253 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:18:48.076560 kubelet[1908]: I0906 00:18:48.076427 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.076560 kubelet[1908]: I0906 00:18:48.076542 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-kube-api-access-rsk6c" (OuterVolumeSpecName: "kube-api-access-rsk6c") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "kube-api-access-rsk6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:18:48.076674 kubelet[1908]: I0906 00:18:48.076577 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.076674 kubelet[1908]: I0906 00:18:48.076613 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.076674 kubelet[1908]: I0906 00:18:48.076636 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.076674 kubelet[1908]: I0906 00:18:48.076658 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:18:48.079815 kubelet[1908]: I0906 00:18:48.079716 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:18:48.084259 kubelet[1908]: I0906 00:18:48.084141 1908 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "88cd0e59-2657-40ff-84a7-3a88c57ea8ea" (UID: "88cd0e59-2657-40ff-84a7-3a88c57ea8ea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:18:48.169535 kubelet[1908]: I0906 00:18:48.169492 1908 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-xtables-lock\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170109 kubelet[1908]: I0906 00:18:48.170082 1908 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-etc-cni-netd\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170230 kubelet[1908]: I0906 00:18:48.170216 1908 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cni-path\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170322 kubelet[1908]: I0906 00:18:48.170302 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-net\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170420 kubelet[1908]: I0906 00:18:48.170403 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-cgroup\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170492 kubelet[1908]: I0906 00:18:48.170480 1908 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsk6c\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-kube-api-access-rsk6c\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170575 kubelet[1908]: I0906 00:18:48.170558 1908 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-clustermesh-secrets\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170677 kubelet[1908]: I0906 00:18:48.170658 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170775 kubelet[1908]: I0906 00:18:48.170757 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-run\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170869 kubelet[1908]: I0906 00:18:48.170854 1908 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.170948 kubelet[1908]: I0906 00:18:48.170931 1908 reconciler_common.go:293] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hubble-tls\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.171038 kubelet[1908]: I0906 00:18:48.171025 1908 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-hostproc\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.171135 kubelet[1908]: I0906 00:18:48.171118 1908 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-lib-modules\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.171225 kubelet[1908]: I0906 00:18:48.171212 1908 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-cilium-config-path\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.171294 kubelet[1908]: I0906 00:18:48.171282 1908 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88cd0e59-2657-40ff-84a7-3a88c57ea8ea-bpf-maps\") on node \"ci-3510.3.8-n-f21ba72e96\" DevicePath \"\"" Sep 6 00:18:48.278025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84-shm.mount: Deactivated successfully. Sep 6 00:18:48.278490 systemd[1]: var-lib-kubelet-pods-88cd0e59\x2d2657\x2d40ff\x2d84a7\x2d3a88c57ea8ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drsk6c.mount: Deactivated successfully. Sep 6 00:18:48.278556 systemd[1]: var-lib-kubelet-pods-88cd0e59\x2d2657\x2d40ff\x2d84a7\x2d3a88c57ea8ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:18:48.278617 systemd[1]: var-lib-kubelet-pods-88cd0e59\x2d2657\x2d40ff\x2d84a7\x2d3a88c57ea8ea-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:18:48.278674 systemd[1]: var-lib-kubelet-pods-88cd0e59\x2d2657\x2d40ff\x2d84a7\x2d3a88c57ea8ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:18:48.657775 kubelet[1908]: E0906 00:18:48.657717 1908 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:18:48.890154 kubelet[1908]: I0906 00:18:48.890124 1908 scope.go:117] "RemoveContainer" containerID="83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02" Sep 6 00:18:48.894432 systemd[1]: Removed slice kubepods-burstable-pod88cd0e59_2657_40ff_84a7_3a88c57ea8ea.slice. Sep 6 00:18:48.898327 env[1184]: time="2025-09-06T00:18:48.898115548Z" level=info msg="RemoveContainer for \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\"" Sep 6 00:18:48.901180 env[1184]: time="2025-09-06T00:18:48.901126666Z" level=info msg="RemoveContainer for \"83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02\" returns successfully" Sep 6 00:18:48.974204 kubelet[1908]: E0906 00:18:48.974049 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88cd0e59-2657-40ff-84a7-3a88c57ea8ea" containerName="mount-cgroup" Sep 6 00:18:48.974204 kubelet[1908]: I0906 00:18:48.974121 1908 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cd0e59-2657-40ff-84a7-3a88c57ea8ea" containerName="mount-cgroup" Sep 6 00:18:48.983575 systemd[1]: Created slice kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice. 
Sep 6 00:18:49.078570 kubelet[1908]: I0906 00:18:49.078522 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-host-proc-sys-kernel\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.078824 kubelet[1908]: I0906 00:18:49.078800 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4395e00-c967-43ff-8eaa-06f9bea276e6-hubble-tls\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.078923 kubelet[1908]: I0906 00:18:49.078909 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-bpf-maps\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079145 kubelet[1908]: I0906 00:18:49.079128 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-cni-path\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079263 kubelet[1908]: I0906 00:18:49.079248 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-lib-modules\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079356 kubelet[1908]: I0906 00:18:49.079342 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-cilium-cgroup\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079439 kubelet[1908]: I0906 00:18:49.079425 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4395e00-c967-43ff-8eaa-06f9bea276e6-clustermesh-secrets\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079530 kubelet[1908]: I0906 00:18:49.079514 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4395e00-c967-43ff-8eaa-06f9bea276e6-cilium-config-path\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079714 kubelet[1908]: I0906 00:18:49.079694 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-hostproc\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079822 kubelet[1908]: I0906 00:18:49.079806 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gg9p\" (UniqueName: \"kubernetes.io/projected/b4395e00-c967-43ff-8eaa-06f9bea276e6-kube-api-access-8gg9p\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.079914 kubelet[1908]: I0906 00:18:49.079900 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-xtables-lock\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.080009 kubelet[1908]: I0906 00:18:49.079996 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-cilium-run\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.080107 kubelet[1908]: I0906 00:18:49.080093 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-host-proc-sys-net\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.080204 kubelet[1908]: I0906 00:18:49.080189 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4395e00-c967-43ff-8eaa-06f9bea276e6-etc-cni-netd\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.080300 kubelet[1908]: I0906 00:18:49.080269 1908 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4395e00-c967-43ff-8eaa-06f9bea276e6-cilium-ipsec-secrets\") pod \"cilium-qbqdq\" (UID: \"b4395e00-c967-43ff-8eaa-06f9bea276e6\") " pod="kube-system/cilium-qbqdq" Sep 6 00:18:49.291072 kubelet[1908]: E0906 00:18:49.290905 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:49.293295 env[1184]: time="2025-09-06T00:18:49.292698937Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbqdq,Uid:b4395e00-c967-43ff-8eaa-06f9bea276e6,Namespace:kube-system,Attempt:0,}" Sep 6 00:18:49.310059 env[1184]: time="2025-09-06T00:18:49.309764259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:18:49.310059 env[1184]: time="2025-09-06T00:18:49.309810385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:18:49.310059 env[1184]: time="2025-09-06T00:18:49.309826951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:18:49.310384 env[1184]: time="2025-09-06T00:18:49.310101058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62 pid=3843 runtime=io.containerd.runc.v2 Sep 6 00:18:49.329277 systemd[1]: Started cri-containerd-a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62.scope. 
Sep 6 00:18:49.373816 env[1184]: time="2025-09-06T00:18:49.373761050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbqdq,Uid:b4395e00-c967-43ff-8eaa-06f9bea276e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\"" Sep 6 00:18:49.375031 kubelet[1908]: E0906 00:18:49.374969 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:49.380354 env[1184]: time="2025-09-06T00:18:49.380308010Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:18:49.392470 env[1184]: time="2025-09-06T00:18:49.392405042Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9\"" Sep 6 00:18:49.397637 env[1184]: time="2025-09-06T00:18:49.397596488Z" level=info msg="StartContainer for \"3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9\"" Sep 6 00:18:49.418181 systemd[1]: Started cri-containerd-3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9.scope. Sep 6 00:18:49.462715 env[1184]: time="2025-09-06T00:18:49.462663560Z" level=info msg="StartContainer for \"3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9\" returns successfully" Sep 6 00:18:49.479040 systemd[1]: cri-containerd-3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9.scope: Deactivated successfully. 
Sep 6 00:18:49.509443 env[1184]: time="2025-09-06T00:18:49.509389551Z" level=info msg="shim disconnected" id=3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9 Sep 6 00:18:49.509784 env[1184]: time="2025-09-06T00:18:49.509756834Z" level=warning msg="cleaning up after shim disconnected" id=3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9 namespace=k8s.io Sep 6 00:18:49.509871 env[1184]: time="2025-09-06T00:18:49.509857678Z" level=info msg="cleaning up dead shim" Sep 6 00:18:49.517100 kubelet[1908]: I0906 00:18:49.516680 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cd0e59-2657-40ff-84a7-3a88c57ea8ea" path="/var/lib/kubelet/pods/88cd0e59-2657-40ff-84a7-3a88c57ea8ea/volumes" Sep 6 00:18:49.527308 env[1184]: time="2025-09-06T00:18:49.527250235Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3928 runtime=io.containerd.runc.v2\n" Sep 6 00:18:49.896131 kubelet[1908]: E0906 00:18:49.896093 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:49.898828 env[1184]: time="2025-09-06T00:18:49.898773219Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:18:49.911535 env[1184]: time="2025-09-06T00:18:49.911467317Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9\"" Sep 6 00:18:49.912378 env[1184]: time="2025-09-06T00:18:49.912334831Z" level=info msg="StartContainer for 
\"b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9\"" Sep 6 00:18:49.944851 systemd[1]: Started cri-containerd-b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9.scope. Sep 6 00:18:49.988843 env[1184]: time="2025-09-06T00:18:49.988717395Z" level=info msg="StartContainer for \"b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9\" returns successfully" Sep 6 00:18:50.001662 systemd[1]: cri-containerd-b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9.scope: Deactivated successfully. Sep 6 00:18:50.031278 env[1184]: time="2025-09-06T00:18:50.031228885Z" level=info msg="shim disconnected" id=b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9 Sep 6 00:18:50.031702 env[1184]: time="2025-09-06T00:18:50.031667629Z" level=warning msg="cleaning up after shim disconnected" id=b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9 namespace=k8s.io Sep 6 00:18:50.031839 env[1184]: time="2025-09-06T00:18:50.031815368Z" level=info msg="cleaning up dead shim" Sep 6 00:18:50.043942 env[1184]: time="2025-09-06T00:18:50.043871048Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" Sep 6 00:18:50.817227 kubelet[1908]: W0906 00:18:50.817159 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88cd0e59_2657_40ff_84a7_3a88c57ea8ea.slice/cri-containerd-83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02.scope WatchSource:0}: container "83136da07924c85e77f38eac2d9fb8361f5741702a375b34632fc2da590dbf02" in namespace "k8s.io": not found Sep 6 00:18:50.912169 kubelet[1908]: E0906 00:18:50.912121 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 6 00:18:50.918079 
env[1184]: time="2025-09-06T00:18:50.918034993Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:18:50.934335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272517628.mount: Deactivated successfully. Sep 6 00:18:50.947830 env[1184]: time="2025-09-06T00:18:50.947755954Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa\"" Sep 6 00:18:50.948987 env[1184]: time="2025-09-06T00:18:50.948938298Z" level=info msg="StartContainer for \"4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa\"" Sep 6 00:18:50.983361 systemd[1]: Started cri-containerd-4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa.scope. Sep 6 00:18:51.036187 env[1184]: time="2025-09-06T00:18:51.036129027Z" level=info msg="StartContainer for \"4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa\" returns successfully" Sep 6 00:18:51.040313 systemd[1]: cri-containerd-4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa.scope: Deactivated successfully. 
Sep 6 00:18:51.072850 env[1184]: time="2025-09-06T00:18:51.072146487Z" level=info msg="shim disconnected" id=4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa
Sep 6 00:18:51.072850 env[1184]: time="2025-09-06T00:18:51.072203878Z" level=warning msg="cleaning up after shim disconnected" id=4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa namespace=k8s.io
Sep 6 00:18:51.072850 env[1184]: time="2025-09-06T00:18:51.072215290Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:51.088550 env[1184]: time="2025-09-06T00:18:51.088484798Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4048 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:51.916173 kubelet[1908]: E0906 00:18:51.916138 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:51.921071 env[1184]: time="2025-09-06T00:18:51.921011392Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:18:51.952471 env[1184]: time="2025-09-06T00:18:51.952419477Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c\""
Sep 6 00:18:51.953379 env[1184]: time="2025-09-06T00:18:51.953339491Z" level=info msg="StartContainer for \"827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c\""
Sep 6 00:18:51.977539 systemd[1]: Started cri-containerd-827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c.scope.
Sep 6 00:18:52.023308 systemd[1]: cri-containerd-827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c.scope: Deactivated successfully.
Sep 6 00:18:52.025641 env[1184]: time="2025-09-06T00:18:52.025458218Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice/cri-containerd-827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c.scope/memory.events\": no such file or directory"
Sep 6 00:18:52.027738 env[1184]: time="2025-09-06T00:18:52.027662104Z" level=info msg="StartContainer for \"827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c\" returns successfully"
Sep 6 00:18:52.054855 env[1184]: time="2025-09-06T00:18:52.054798711Z" level=info msg="shim disconnected" id=827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c
Sep 6 00:18:52.055433 env[1184]: time="2025-09-06T00:18:52.055402958Z" level=warning msg="cleaning up after shim disconnected" id=827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c namespace=k8s.io
Sep 6 00:18:52.055576 env[1184]: time="2025-09-06T00:18:52.055559256Z" level=info msg="cleaning up dead shim"
Sep 6 00:18:52.069425 env[1184]: time="2025-09-06T00:18:52.069364801Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:18:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4101 runtime=io.containerd.runc.v2\n"
Sep 6 00:18:52.301535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c-rootfs.mount: Deactivated successfully.
Sep 6 00:18:52.921151 kubelet[1908]: E0906 00:18:52.921115 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:52.924390 env[1184]: time="2025-09-06T00:18:52.924340465Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:18:52.940905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505198918.mount: Deactivated successfully.
Sep 6 00:18:52.951765 env[1184]: time="2025-09-06T00:18:52.951688955Z" level=info msg="CreateContainer within sandbox \"a0c226ac1f35116efcbbd40698966d12ef2d027574a013f3373353db9c869a62\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b\""
Sep 6 00:18:52.952789 env[1184]: time="2025-09-06T00:18:52.952754646Z" level=info msg="StartContainer for \"7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b\""
Sep 6 00:18:52.979142 systemd[1]: Started cri-containerd-7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b.scope.
Sep 6 00:18:53.016720 env[1184]: time="2025-09-06T00:18:53.016657523Z" level=info msg="StartContainer for \"7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b\" returns successfully"
Sep 6 00:18:53.544270 env[1184]: time="2025-09-06T00:18:53.544227922Z" level=info msg="StopPodSandbox for \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\""
Sep 6 00:18:53.544863 env[1184]: time="2025-09-06T00:18:53.544797316Z" level=info msg="TearDown network for sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" successfully"
Sep 6 00:18:53.545052 env[1184]: time="2025-09-06T00:18:53.545024021Z" level=info msg="StopPodSandbox for \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" returns successfully"
Sep 6 00:18:53.545597 env[1184]: time="2025-09-06T00:18:53.545560843Z" level=info msg="RemovePodSandbox for \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\""
Sep 6 00:18:53.545822 env[1184]: time="2025-09-06T00:18:53.545763467Z" level=info msg="Forcibly stopping sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\""
Sep 6 00:18:53.546128 env[1184]: time="2025-09-06T00:18:53.546089278Z" level=info msg="TearDown network for sandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" successfully"
Sep 6 00:18:53.550903 env[1184]: time="2025-09-06T00:18:53.550835175Z" level=info msg="RemovePodSandbox \"5b3df8d4951b43a5a758ab4865c36333e9933fc26c83f8ff0171754ae3def0b6\" returns successfully"
Sep 6 00:18:53.551946 env[1184]: time="2025-09-06T00:18:53.551913091Z" level=info msg="StopPodSandbox for \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\""
Sep 6 00:18:53.552260 env[1184]: time="2025-09-06T00:18:53.552202167Z" level=info msg="TearDown network for sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" successfully"
Sep 6 00:18:53.552387 env[1184]: time="2025-09-06T00:18:53.552362002Z" level=info msg="StopPodSandbox for \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" returns successfully"
Sep 6 00:18:53.552913 env[1184]: time="2025-09-06T00:18:53.552883740Z" level=info msg="RemovePodSandbox for \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\""
Sep 6 00:18:53.553084 env[1184]: time="2025-09-06T00:18:53.553040558Z" level=info msg="Forcibly stopping sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\""
Sep 6 00:18:53.553254 env[1184]: time="2025-09-06T00:18:53.553229959Z" level=info msg="TearDown network for sandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" successfully"
Sep 6 00:18:53.556354 env[1184]: time="2025-09-06T00:18:53.556314741Z" level=info msg="RemovePodSandbox \"8f5d572a7f212bf3b0a55d94e09fcac915e9c3e3f017002ed9690627a73b4c84\" returns successfully"
Sep 6 00:18:53.557035 env[1184]: time="2025-09-06T00:18:53.557009261Z" level=info msg="StopPodSandbox for \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\""
Sep 6 00:18:53.557285 env[1184]: time="2025-09-06T00:18:53.557233528Z" level=info msg="TearDown network for sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" successfully"
Sep 6 00:18:53.557374 env[1184]: time="2025-09-06T00:18:53.557356945Z" level=info msg="StopPodSandbox for \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" returns successfully"
Sep 6 00:18:53.557897 env[1184]: time="2025-09-06T00:18:53.557874055Z" level=info msg="RemovePodSandbox for \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\""
Sep 6 00:18:53.558069 env[1184]: time="2025-09-06T00:18:53.558032934Z" level=info msg="Forcibly stopping sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\""
Sep 6 00:18:53.558219 env[1184]: time="2025-09-06T00:18:53.558191693Z" level=info msg="TearDown network for sandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" successfully"
Sep 6 00:18:53.560636 env[1184]: time="2025-09-06T00:18:53.560597867Z" level=info msg="RemovePodSandbox \"2b88c6d063783962829481fc9d6a74d251d2c97cfe208c67b29bbb3b52a07074\" returns successfully"
Sep 6 00:18:53.620024 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:18:53.927294 kubelet[1908]: W0906 00:18:53.927219 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice/cri-containerd-3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9.scope WatchSource:0}: task 3d4bab447b70e9a5d06bf1b29d3819268ddb4536e532045fbb954b73c8d7fdb9 not found: not found
Sep 6 00:18:53.928139 kubelet[1908]: E0906 00:18:53.928113 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:53.960201 kubelet[1908]: I0906 00:18:53.960117 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qbqdq" podStartSLOduration=5.960096812 podStartE2EDuration="5.960096812s" podCreationTimestamp="2025-09-06 00:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:18:53.959025834 +0000 UTC m=+120.663510888" watchObservedRunningTime="2025-09-06 00:18:53.960096812 +0000 UTC m=+120.664581866"
Sep 6 00:18:55.293716 kubelet[1908]: E0906 00:18:55.293667 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:57.037779 kubelet[1908]: W0906 00:18:57.037712 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice/cri-containerd-b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9.scope WatchSource:0}: task b2a40cf541f21445ff22f967eff632516dd17b74899a413dfa30b21875764fa9 not found: not found
Sep 6 00:18:57.091142 systemd-networkd[1003]: lxc_health: Link UP
Sep 6 00:18:57.097905 systemd-networkd[1003]: lxc_health: Gained carrier
Sep 6 00:18:57.098136 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:18:57.304424 kubelet[1908]: E0906 00:18:57.303967 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:57.937080 kubelet[1908]: E0906 00:18:57.937016 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:18:58.155362 systemd-networkd[1003]: lxc_health: Gained IPv6LL
Sep 6 00:18:58.273549 systemd[1]: run-containerd-runc-k8s.io-7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b-runc.rBd6Ns.mount: Deactivated successfully.
Sep 6 00:18:58.939967 kubelet[1908]: E0906 00:18:58.939916 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 6 00:19:00.150900 kubelet[1908]: W0906 00:19:00.150836 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice/cri-containerd-4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa.scope WatchSource:0}: task 4e65a7f1291093938e586b9b45e6583896075b8722ebe93056e09cf25bb9fbaa not found: not found
Sep 6 00:19:00.483926 systemd[1]: run-containerd-runc-k8s.io-7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b-runc.sHoWNH.mount: Deactivated successfully.
Sep 6 00:19:02.743323 systemd[1]: run-containerd-runc-k8s.io-7193f0b21978275b18fbc26226c2fa53eefc472c9169e931efc14b64aeda736b-runc.iIrfnd.mount: Deactivated successfully.
Sep 6 00:19:02.867079 sshd[3705]: pam_unix(sshd:session): session closed for user core
Sep 6 00:19:02.871586 systemd[1]: sshd@29-159.223.206.243:22-147.75.109.163:34838.service: Deactivated successfully.
Sep 6 00:19:02.872888 systemd[1]: session-30.scope: Deactivated successfully.
Sep 6 00:19:02.874078 systemd-logind[1177]: Session 30 logged out. Waiting for processes to exit.
Sep 6 00:19:02.875619 systemd-logind[1177]: Removed session 30.
Sep 6 00:19:03.274054 kubelet[1908]: W0906 00:19:03.273378 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4395e00_c967_43ff_8eaa_06f9bea276e6.slice/cri-containerd-827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c.scope WatchSource:0}: task 827131acbd824beb31b13ce021016fefe7061d0d8a0cb6aae720f0ef4b55913c not found: not found