Sep 6 00:22:12.045038 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:22:12.045073 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:22:12.045084 kernel: BIOS-provided physical RAM map:
Sep 6 00:22:12.045092 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 6 00:22:12.045104 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 6 00:22:12.045112 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 6 00:22:12.045121 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 6 00:22:12.045127 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 6 00:22:12.045136 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 6 00:22:12.045141 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 6 00:22:12.045147 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 6 00:22:12.045153 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 6 00:22:12.045158 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 6 00:22:12.045164 kernel: NX (Execute Disable) protection: active
Sep 6 00:22:12.045173 kernel: SMBIOS 2.8 present.
Sep 6 00:22:12.045180 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 6 00:22:12.045186 kernel: Hypervisor detected: KVM
Sep 6 00:22:12.045192 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 6 00:22:12.045203 kernel: kvm-clock: cpu 0, msr 9a19f001, primary cpu clock
Sep 6 00:22:12.045209 kernel: kvm-clock: using sched offset of 3388224010 cycles
Sep 6 00:22:12.045216 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 6 00:22:12.045223 kernel: tsc: Detected 2794.748 MHz processor
Sep 6 00:22:12.045229 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:22:12.045238 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:22:12.045244 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 6 00:22:12.045251 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:22:12.045257 kernel: Using GB pages for direct mapping
Sep 6 00:22:12.045263 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:22:12.045270 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 6 00:22:12.045276 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045283 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045289 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045297 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 6 00:22:12.045304 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045310 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045316 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045323 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:22:12.045331 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 6 00:22:12.045340 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 6 00:22:12.045348 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 6 00:22:12.045363 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 6 00:22:12.045370 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 6 00:22:12.045377 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 6 00:22:12.045384 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 6 00:22:12.045390 kernel: No NUMA configuration found
Sep 6 00:22:12.045397 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 6 00:22:12.045422 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 6 00:22:12.045437 kernel: Zone ranges:
Sep 6 00:22:12.045444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:22:12.045456 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 6 00:22:12.045475 kernel: Normal empty
Sep 6 00:22:12.045487 kernel: Movable zone start for each node
Sep 6 00:22:12.045497 kernel: Early memory node ranges
Sep 6 00:22:12.045504 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 6 00:22:12.045511 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 6 00:22:12.045531 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 6 00:22:12.045546 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:22:12.045561 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 6 00:22:12.045574 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 6 00:22:12.045583 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 6 00:22:12.045590 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 6 00:22:12.045597 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 6 00:22:12.045608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:22:12.045623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 6 00:22:12.045630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:22:12.045654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 6 00:22:12.045661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 6 00:22:12.045668 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:22:12.045677 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:22:12.045690 kernel: TSC deadline timer available
Sep 6 00:22:12.045697 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 6 00:22:12.045713 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 6 00:22:12.045721 kernel: kvm-guest: setup PV sched yield
Sep 6 00:22:12.045755 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 6 00:22:12.045782 kernel: Booting paravirtualized kernel on KVM
Sep 6 00:22:12.045809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:22:12.045817 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 6 00:22:12.045824 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 6 00:22:12.045830 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 6 00:22:12.045837 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 6 00:22:12.045844 kernel: kvm-guest: setup async PF for cpu 0
Sep 6 00:22:12.045851 kernel: kvm-guest: stealtime: cpu 0, msr 94e1c0c0
Sep 6 00:22:12.045865 kernel: kvm-guest: PV spinlocks enabled
Sep 6 00:22:12.045875 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 6 00:22:12.045890 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 6 00:22:12.045907 kernel: Policy zone: DMA32
Sep 6 00:22:12.045916 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:22:12.045923 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:22:12.045935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:22:12.045943 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:22:12.045957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:22:12.045983 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 6 00:22:12.045992 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:22:12.045999 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:22:12.046006 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:22:12.046012 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:22:12.046020 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:22:12.046027 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:22:12.046039 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:22:12.046047 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:22:12.046056 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:22:12.046063 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:22:12.046070 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 6 00:22:12.046082 kernel: random: crng init done
Sep 6 00:22:12.046089 kernel: Console: colour VGA+ 80x25
Sep 6 00:22:12.046096 kernel: printk: console [ttyS0] enabled
Sep 6 00:22:12.046108 kernel: ACPI: Core revision 20210730
Sep 6 00:22:12.046123 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 6 00:22:12.046131 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:22:12.046140 kernel: x2apic enabled
Sep 6 00:22:12.046147 kernel: Switched APIC routing to physical x2apic.
Sep 6 00:22:12.046169 kernel: kvm-guest: setup PV IPIs
Sep 6 00:22:12.046195 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:22:12.046216 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 6 00:22:12.046239 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 6 00:22:12.046247 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 6 00:22:12.046262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 6 00:22:12.046269 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 6 00:22:12.046284 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:22:12.046291 kernel: Spectre V2 : Mitigation: Retpolines
Sep 6 00:22:12.046300 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 6 00:22:12.046308 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 6 00:22:12.046315 kernel: active return thunk: retbleed_return_thunk
Sep 6 00:22:12.046322 kernel: RETBleed: Mitigation: untrained return thunk
Sep 6 00:22:12.046334 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:22:12.046343 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:22:12.046350 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:22:12.046378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:22:12.046386 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:22:12.046393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:22:12.046401 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 6 00:22:12.046408 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:22:12.046415 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:22:12.046422 kernel: LSM: Security Framework initializing
Sep 6 00:22:12.046440 kernel: SELinux: Initializing.
Sep 6 00:22:12.046448 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:22:12.046455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:22:12.046471 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 6 00:22:12.046478 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 6 00:22:12.046493 kernel: ... version: 0
Sep 6 00:22:12.046501 kernel: ... bit width: 48
Sep 6 00:22:12.046508 kernel: ... generic registers: 6
Sep 6 00:22:12.046515 kernel: ... value mask: 0000ffffffffffff
Sep 6 00:22:12.046525 kernel: ... max period: 00007fffffffffff
Sep 6 00:22:12.046540 kernel: ... fixed-purpose events: 0
Sep 6 00:22:12.046548 kernel: ... event mask: 000000000000003f
Sep 6 00:22:12.046562 kernel: signal: max sigframe size: 1776
Sep 6 00:22:12.046570 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:22:12.046585 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:22:12.046597 kernel: x86: Booting SMP configuration:
Sep 6 00:22:12.046604 kernel: .... node #0, CPUs: #1
Sep 6 00:22:12.046618 kernel: kvm-clock: cpu 1, msr 9a19f041, secondary cpu clock
Sep 6 00:22:12.046636 kernel: kvm-guest: setup async PF for cpu 1
Sep 6 00:22:12.046644 kernel: kvm-guest: stealtime: cpu 1, msr 94e9c0c0
Sep 6 00:22:12.046651 kernel: #2
Sep 6 00:22:12.046658 kernel: kvm-clock: cpu 2, msr 9a19f081, secondary cpu clock
Sep 6 00:22:12.046666 kernel: kvm-guest: setup async PF for cpu 2
Sep 6 00:22:12.046673 kernel: kvm-guest: stealtime: cpu 2, msr 94f1c0c0
Sep 6 00:22:12.046683 kernel: #3
Sep 6 00:22:12.046690 kernel: kvm-clock: cpu 3, msr 9a19f0c1, secondary cpu clock
Sep 6 00:22:12.046697 kernel: kvm-guest: setup async PF for cpu 3
Sep 6 00:22:12.046704 kernel: kvm-guest: stealtime: cpu 3, msr 94f9c0c0
Sep 6 00:22:12.046713 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:22:12.046720 kernel: smpboot: Max logical packages: 1
Sep 6 00:22:12.046741 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 6 00:22:12.046748 kernel: devtmpfs: initialized
Sep 6 00:22:12.046756 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:22:12.046763 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:22:12.046770 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:22:12.046777 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:22:12.046785 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:22:12.046794 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:22:12.046801 kernel: audit: type=2000 audit(1757118130.897:1): state=initialized audit_enabled=0 res=1
Sep 6 00:22:12.046808 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:22:12.046816 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:22:12.046823 kernel: cpuidle: using governor menu
Sep 6 00:22:12.046830 kernel: ACPI: bus type PCI registered
Sep 6 00:22:12.046837 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:22:12.046845 kernel: dca service started, version 1.12.1
Sep 6 00:22:12.046852 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 6 00:22:12.046861 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 6 00:22:12.046868 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:22:12.046875 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:22:12.046883 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:22:12.046890 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:22:12.046897 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:22:12.046904 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:22:12.046911 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:22:12.046918 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:22:12.046928 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:22:12.046935 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:22:12.046942 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:22:12.046949 kernel: ACPI: Interpreter enabled
Sep 6 00:22:12.046957 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 6 00:22:12.046972 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:22:12.046980 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:22:12.046987 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 6 00:22:12.046994 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:22:12.047172 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:22:12.047251 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 6 00:22:12.047324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 6 00:22:12.047333 kernel: PCI host bridge to bus 0000:00
Sep 6 00:22:12.047434 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:22:12.047531 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:22:12.047642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:22:12.048439 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 6 00:22:12.048631 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 6 00:22:12.048818 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 6 00:22:12.049044 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:22:12.049220 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 6 00:22:12.049368 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 6 00:22:12.049485 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 6 00:22:12.049636 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 6 00:22:12.049866 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 6 00:22:12.050029 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 6 00:22:12.050203 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:22:12.050324 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 6 00:22:12.050404 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 6 00:22:12.050517 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 6 00:22:12.050622 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 6 00:22:12.050699 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 6 00:22:12.050801 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 6 00:22:12.050880 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 6 00:22:12.050972 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 6 00:22:12.051053 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 6 00:22:12.051126 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 6 00:22:12.051199 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 6 00:22:12.051273 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 6 00:22:12.051441 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 6 00:22:12.051543 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 6 00:22:12.051659 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 6 00:22:12.051754 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 6 00:22:12.051830 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 6 00:22:12.051916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 6 00:22:12.051990 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 6 00:22:12.052000 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 6 00:22:12.052007 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 6 00:22:12.052015 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 6 00:22:12.052022 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 6 00:22:12.052032 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 6 00:22:12.052039 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 6 00:22:12.052046 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 6 00:22:12.052054 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 6 00:22:12.052061 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 6 00:22:12.052068 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 6 00:22:12.052075 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 6 00:22:12.052083 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 6 00:22:12.052090 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 6 00:22:12.052099 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 6 00:22:12.052106 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 6 00:22:12.052113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 6 00:22:12.052121 kernel: iommu: Default domain type: Translated
Sep 6 00:22:12.052128 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 6 00:22:12.052204 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 6 00:22:12.052278 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 6 00:22:12.052351 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 6 00:22:12.052363 kernel: vgaarb: loaded
Sep 6 00:22:12.052371 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:22:12.052378 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:22:12.052385 kernel: PTP clock support registered
Sep 6 00:22:12.052393 kernel: PCI: Using ACPI for IRQ routing
Sep 6 00:22:12.052400 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 6 00:22:12.052407 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 6 00:22:12.052415 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 6 00:22:12.052422 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 6 00:22:12.052431 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 6 00:22:12.052438 kernel: clocksource: Switched to clocksource kvm-clock
Sep 6 00:22:12.052446 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:22:12.052453 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:22:12.052461 kernel: pnp: PnP ACPI init
Sep 6 00:22:12.052569 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 6 00:22:12.052580 kernel: pnp: PnP ACPI: found 6 devices
Sep 6 00:22:12.052588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 6 00:22:12.052598 kernel: NET: Registered PF_INET protocol family
Sep 6 00:22:12.052606 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:22:12.052613 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:22:12.052621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:22:12.052628 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:22:12.052635 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:22:12.052643 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:22:12.052650 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:22:12.052657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:22:12.052666 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:22:12.052674 kernel: NET: Registered PF_XDP protocol family
Sep 6 00:22:12.052758 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 6 00:22:12.052825 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 6 00:22:12.052891 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 6 00:22:12.052960 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 6 00:22:12.053030 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 6 00:22:12.053103 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 6 00:22:12.053118 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:22:12.053126 kernel: Initialise system trusted keyrings
Sep 6 00:22:12.053133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:22:12.053140 kernel: Key type asymmetric registered
Sep 6 00:22:12.053154 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:22:12.053164 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:22:12.053174 kernel: io scheduler mq-deadline registered
Sep 6 00:22:12.053186 kernel: io scheduler kyber registered
Sep 6 00:22:12.053193 kernel: io scheduler bfq registered
Sep 6 00:22:12.053200 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:22:12.053211 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 6 00:22:12.053218 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 6 00:22:12.053226 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 6 00:22:12.053233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:22:12.053240 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:22:12.053248 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 6 00:22:12.053255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 6 00:22:12.053262 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 6 00:22:12.053368 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 6 00:22:12.053397 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 6 00:22:12.053536 kernel: rtc_cmos 00:04: registered as rtc0
Sep 6 00:22:12.053637 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T00:22:11 UTC (1757118131)
Sep 6 00:22:12.053735 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 6 00:22:12.053746 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:22:12.053754 kernel: Segment Routing with IPv6
Sep 6 00:22:12.053761 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:22:12.053771 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:22:12.053779 kernel: Key type dns_resolver registered
Sep 6 00:22:12.053786 kernel: IPI shorthand broadcast: enabled
Sep 6 00:22:12.053793 kernel: sched_clock: Marking stable (454002684, 101693042)->(614236983, -58541257)
Sep 6 00:22:12.053801 kernel: registered taskstats version 1
Sep 6 00:22:12.053808 kernel: Loading compiled-in X.509 certificates
Sep 6 00:22:12.053815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:22:12.053831 kernel: Key type .fscrypt registered
Sep 6 00:22:12.053838 kernel: Key type fscrypt-provisioning registered
Sep 6 00:22:12.053848 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:22:12.053856 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:22:12.053863 kernel: ima: No architecture policies found
Sep 6 00:22:12.053870 kernel: clk: Disabling unused clocks
Sep 6 00:22:12.053878 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:22:12.053885 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:22:12.053898 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:22:12.053909 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:22:12.053924 kernel: Run /init as init process
Sep 6 00:22:12.053942 kernel: with arguments:
Sep 6 00:22:12.053950 kernel: /init
Sep 6 00:22:12.053957 kernel: with environment:
Sep 6 00:22:12.053964 kernel: HOME=/
Sep 6 00:22:12.053971 kernel: TERM=linux
Sep 6 00:22:12.053986 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:22:12.054007 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:22:12.054027 systemd[1]: Detected virtualization kvm.
Sep 6 00:22:12.054047 systemd[1]: Detected architecture x86-64.
Sep 6 00:22:12.054066 systemd[1]: Running in initrd.
Sep 6 00:22:12.054079 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:22:12.054097 systemd[1]: Hostname set to .
Sep 6 00:22:12.054107 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:22:12.054115 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:22:12.054123 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:22:12.054131 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:22:12.054140 systemd[1]: Reached target paths.target.
Sep 6 00:22:12.054148 systemd[1]: Reached target slices.target.
Sep 6 00:22:12.054172 systemd[1]: Reached target swap.target.
Sep 6 00:22:12.054192 systemd[1]: Reached target timers.target.
Sep 6 00:22:12.054206 systemd[1]: Listening on iscsid.socket.
Sep 6 00:22:12.054214 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:22:12.054224 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:22:12.054233 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:22:12.054249 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:22:12.054270 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:22:12.054279 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:22:12.054287 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:22:12.054295 systemd[1]: Reached target sockets.target.
Sep 6 00:22:12.054304 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:22:12.054314 systemd[1]: Finished network-cleanup.service.
Sep 6 00:22:12.054322 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:22:12.054330 systemd[1]: Starting systemd-journald.service...
Sep 6 00:22:12.054338 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:22:12.054346 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:22:12.054354 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:22:12.054362 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:22:12.054370 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:22:12.054379 kernel: audit: type=1130 audit(1757118132.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.054389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:22:12.054405 systemd-journald[199]: Journal started
Sep 6 00:22:12.054459 systemd-journald[199]: Runtime Journal (/run/log/journal/ff716b95f03c4f9e858c304984fe5a07) is 6.0M, max 48.5M, 42.5M free.
Sep 6 00:22:12.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.054744 systemd[1]: Started systemd-journald.service.
Sep 6 00:22:12.065969 systemd-modules-load[200]: Inserted module 'overlay'
Sep 6 00:22:12.098535 kernel: audit: type=1130 audit(1757118132.093:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.074437 systemd-resolved[201]: Positive Trust Anchors:
Sep 6 00:22:12.102969 kernel: audit: type=1130 audit(1757118132.098:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.074469 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:22:12.124311 kernel: audit: type=1130 audit(1757118132.102:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.074508 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:22:12.134253 kernel: audit: type=1130 audit(1757118132.123:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:12.077606 systemd-resolved[201]: Defaulting to hostname 'linux'.
Sep 6 00:22:12.094676 systemd[1]: Started systemd-resolved.service.
Sep 6 00:22:12.099232 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:22:12.103489 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:22:12.125110 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:22:12.130115 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:22:12.150762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:22:12.151087 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:22:12.158217 kernel: audit: type=1130 audit(1757118132.151:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.158252 kernel: Bridge firewalling registered Sep 6 00:22:12.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.155316 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:22:12.157427 systemd-modules-load[200]: Inserted module 'br_netfilter' Sep 6 00:22:12.165438 dracut-cmdline[216]: dracut-dracut-053 Sep 6 00:22:12.167876 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:22:12.176760 kernel: SCSI subsystem initialized Sep 6 00:22:12.189038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:22:12.189078 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:22:12.190286 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:22:12.193009 systemd-modules-load[200]: Inserted module 'dm_multipath' Sep 6 00:22:12.193851 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:22:12.199848 kernel: audit: type=1130 audit(1757118132.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.195785 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:22:12.204560 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:22:12.209254 kernel: audit: type=1130 audit(1757118132.204:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.231757 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:22:12.261774 kernel: iscsi: registered transport (tcp) Sep 6 00:22:12.282822 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:22:12.282858 kernel: QLogic iSCSI HBA Driver Sep 6 00:22:12.315338 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:22:12.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.316980 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:22:12.321381 kernel: audit: type=1130 audit(1757118132.314:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:12.367756 kernel: raid6: avx2x4 gen() 22636 MB/s Sep 6 00:22:12.384774 kernel: raid6: avx2x4 xor() 6906 MB/s Sep 6 00:22:12.401777 kernel: raid6: avx2x2 gen() 27919 MB/s Sep 6 00:22:12.418759 kernel: raid6: avx2x2 xor() 18270 MB/s Sep 6 00:22:12.435762 kernel: raid6: avx2x1 gen() 19504 MB/s Sep 6 00:22:12.452750 kernel: raid6: avx2x1 xor() 13809 MB/s Sep 6 00:22:12.469763 kernel: raid6: sse2x4 gen() 14085 MB/s Sep 6 00:22:12.486768 kernel: raid6: sse2x4 xor() 6823 MB/s Sep 6 00:22:12.503750 kernel: raid6: sse2x2 gen() 16231 MB/s Sep 6 00:22:12.520751 kernel: raid6: sse2x2 xor() 8152 MB/s Sep 6 00:22:12.537758 kernel: raid6: sse2x1 gen() 9543 MB/s Sep 6 00:22:12.555307 kernel: raid6: sse2x1 xor() 5462 MB/s Sep 6 00:22:12.555329 kernel: raid6: using algorithm avx2x2 gen() 27919 MB/s Sep 6 00:22:12.555338 kernel: raid6: .... xor() 18270 MB/s, rmw enabled Sep 6 00:22:12.557009 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:22:12.569776 kernel: xor: automatically using best checksumming function avx Sep 6 00:22:12.660797 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:22:12.670489 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:22:12.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.671000 audit: BPF prog-id=7 op=LOAD Sep 6 00:22:12.672000 audit: BPF prog-id=8 op=LOAD Sep 6 00:22:12.673326 systemd[1]: Starting systemd-udevd.service... Sep 6 00:22:12.686304 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 6 00:22:12.690995 systemd[1]: Started systemd-udevd.service. Sep 6 00:22:12.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:12.694420 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:22:12.705943 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Sep 6 00:22:12.737835 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:22:12.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.740344 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:22:12.781242 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:22:12.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:12.867215 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:22:12.873345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:22:12.873366 kernel: GPT:9289727 != 19775487 Sep 6 00:22:12.873379 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:22:12.873391 kernel: GPT:9289727 != 19775487 Sep 6 00:22:12.873402 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:22:12.873454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:22:12.875837 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:22:12.889752 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:22:12.889824 kernel: libata version 3.00 loaded. Sep 6 00:22:12.889836 kernel: AES CTR mode by8 optimization enabled Sep 6 00:22:12.903891 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Sep 6 00:22:12.955399 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 00:22:12.955569 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 00:22:12.955583 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 00:22:12.955684 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 00:22:12.955799 kernel: scsi host0: ahci Sep 6 00:22:12.955917 kernel: scsi host1: ahci Sep 6 00:22:12.956024 kernel: scsi host2: ahci Sep 6 00:22:12.956229 kernel: scsi host3: ahci Sep 6 00:22:12.956326 kernel: scsi host4: ahci Sep 6 00:22:12.956419 kernel: scsi host5: ahci Sep 6 00:22:12.956507 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 6 00:22:12.956517 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 6 00:22:12.956537 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 6 00:22:12.956545 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 6 00:22:12.956555 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 6 00:22:12.956566 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 6 00:22:12.956575 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (443) Sep 6 00:22:12.905688 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:22:12.918250 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:22:12.960123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:22:12.968160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:22:12.969877 systemd[1]: Starting disk-uuid.service... Sep 6 00:22:13.052197 disk-uuid[539]: Primary Header is updated. Sep 6 00:22:13.052197 disk-uuid[539]: Secondary Entries is updated. Sep 6 00:22:13.052197 disk-uuid[539]: Secondary Header is updated. 
Sep 6 00:22:13.056759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:22:13.060756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:22:13.232369 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 6 00:22:13.232451 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 00:22:13.232462 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 00:22:13.233755 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 00:22:13.234766 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 6 00:22:13.235754 kernel: ata3.00: applying bridge limits Sep 6 00:22:13.235771 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 00:22:13.236757 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 00:22:13.237757 kernel: ata3.00: configured for UDMA/100 Sep 6 00:22:13.238750 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 6 00:22:13.293028 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 6 00:22:13.310972 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 00:22:13.310990 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 6 00:22:14.062764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:22:14.063175 disk-uuid[540]: The operation has completed successfully. Sep 6 00:22:14.088629 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:22:14.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.088721 systemd[1]: Finished disk-uuid.service. Sep 6 00:22:14.095677 systemd[1]: Starting verity-setup.service... 
Sep 6 00:22:14.109757 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 6 00:22:14.129749 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:22:14.132184 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:22:14.136622 systemd[1]: Finished verity-setup.service. Sep 6 00:22:14.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.199436 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:22:14.200969 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:22:14.201063 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:22:14.203228 systemd[1]: Starting ignition-setup.service... Sep 6 00:22:14.205398 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:22:14.212345 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:22:14.212386 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:22:14.212399 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:22:14.221820 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:22:14.230625 systemd[1]: Finished ignition-setup.service. Sep 6 00:22:14.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.233383 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:22:14.284037 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:22:14.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:14.285000 audit: BPF prog-id=9 op=LOAD Sep 6 00:22:14.287097 systemd[1]: Starting systemd-networkd.service... Sep 6 00:22:14.309748 ignition[648]: Ignition 2.14.0 Sep 6 00:22:14.309762 ignition[648]: Stage: fetch-offline Sep 6 00:22:14.309859 ignition[648]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:14.309874 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:14.310614 ignition[648]: parsed url from cmdline: "" Sep 6 00:22:14.313212 systemd-networkd[718]: lo: Link UP Sep 6 00:22:14.310620 ignition[648]: no config URL provided Sep 6 00:22:14.313216 systemd-networkd[718]: lo: Gained carrier Sep 6 00:22:14.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.310627 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:22:14.313831 systemd-networkd[718]: Enumeration completed Sep 6 00:22:14.310638 ignition[648]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:22:14.313912 systemd[1]: Started systemd-networkd.service. Sep 6 00:22:14.310658 ignition[648]: op(1): [started] loading QEMU firmware config module Sep 6 00:22:14.314189 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:22:14.310664 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:22:14.316721 systemd[1]: Reached target network.target. Sep 6 00:22:14.320844 ignition[648]: op(1): [finished] loading QEMU firmware config module Sep 6 00:22:14.317465 systemd-networkd[718]: eth0: Link UP Sep 6 00:22:14.317469 systemd-networkd[718]: eth0: Gained carrier Sep 6 00:22:14.320501 systemd[1]: Starting iscsiuio.service... 
Sep 6 00:22:14.371324 ignition[648]: parsing config with SHA512: e3d0fb04c6786a7f642006b517b747aba1e259a7ed9a32bab9f43518de662816835e505f027c66fd6912ac24b7442e3b80791ec2a79a17c35e940164d5922cb4 Sep 6 00:22:14.419101 unknown[648]: fetched base config from "system" Sep 6 00:22:14.419116 unknown[648]: fetched user config from "qemu" Sep 6 00:22:14.419938 ignition[648]: fetch-offline: fetch-offline passed Sep 6 00:22:14.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.420950 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:22:14.420022 ignition[648]: Ignition finished successfully Sep 6 00:22:14.421596 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:22:14.422627 systemd[1]: Starting ignition-kargs.service... Sep 6 00:22:14.425875 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:22:14.433572 ignition[725]: Ignition 2.14.0 Sep 6 00:22:14.433582 ignition[725]: Stage: kargs Sep 6 00:22:14.433766 ignition[725]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:14.433781 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:14.435141 ignition[725]: kargs: kargs passed Sep 6 00:22:14.437786 systemd[1]: Finished ignition-kargs.service. Sep 6 00:22:14.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.435191 ignition[725]: Ignition finished successfully Sep 6 00:22:14.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:14.439109 systemd[1]: Started iscsiuio.service. Sep 6 00:22:14.441626 systemd[1]: Starting ignition-disks.service... Sep 6 00:22:14.443455 systemd[1]: Starting iscsid.service... Sep 6 00:22:14.447798 iscsid[733]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:22:14.447798 iscsid[733]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:22:14.447798 iscsid[733]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:22:14.447798 iscsid[733]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:22:14.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.459902 iscsid[733]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:22:14.459902 iscsid[733]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:22:14.449947 ignition[732]: Ignition 2.14.0 Sep 6 00:22:14.450819 systemd[1]: Started iscsid.service. Sep 6 00:22:14.449954 ignition[732]: Stage: disks Sep 6 00:22:14.455709 systemd[1]: Finished ignition-disks.service. Sep 6 00:22:14.450073 ignition[732]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:14.457205 systemd[1]: Reached target initrd-root-device.target. 
Sep 6 00:22:14.450086 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:14.459064 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:22:14.451301 ignition[732]: disks: disks passed Sep 6 00:22:14.459880 systemd[1]: Reached target local-fs.target. Sep 6 00:22:14.451348 ignition[732]: Ignition finished successfully Sep 6 00:22:14.460769 systemd[1]: Reached target sysinit.target. Sep 6 00:22:14.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.462881 systemd[1]: Reached target basic.target. Sep 6 00:22:14.464351 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:22:14.475339 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:22:14.476786 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:22:14.478372 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:22:14.479318 systemd[1]: Reached target remote-fs.target. Sep 6 00:22:14.481693 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:22:14.488120 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:22:14.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.489443 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:22:14.500078 systemd-fsck[754]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:22:14.505337 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:22:14.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.506977 systemd[1]: Mounting sysroot.mount... 
Sep 6 00:22:14.513765 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:22:14.513864 systemd[1]: Mounted sysroot.mount. Sep 6 00:22:14.514132 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:22:14.516614 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:22:14.518105 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:22:14.518138 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:22:14.518158 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:22:14.524303 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:22:14.525668 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:22:14.530606 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:22:14.533574 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:22:14.536354 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:22:14.539189 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:22:14.573575 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:22:14.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.576157 systemd-resolved[201]: Detected conflict on linux IN A 10.0.0.130 Sep 6 00:22:14.576161 systemd[1]: Starting ignition-mount.service... Sep 6 00:22:14.576175 systemd-resolved[201]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Sep 6 00:22:14.577788 systemd[1]: Starting sysroot-boot.service... Sep 6 00:22:14.585923 bash[805]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 6 00:22:14.616005 ignition[806]: INFO : Ignition 2.14.0 Sep 6 00:22:14.617060 ignition[806]: INFO : Stage: mount Sep 6 00:22:14.617060 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:14.618754 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:14.618754 ignition[806]: INFO : mount: mount passed Sep 6 00:22:14.618754 ignition[806]: INFO : Ignition finished successfully Sep 6 00:22:14.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:14.618969 systemd[1]: Finished ignition-mount.service. Sep 6 00:22:14.626012 systemd[1]: Finished sysroot-boot.service. Sep 6 00:22:14.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:15.142482 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:22:15.149313 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Sep 6 00:22:15.149342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:22:15.149352 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:22:15.150121 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:22:15.154184 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:22:15.155233 systemd[1]: Starting ignition-files.service... 
Sep 6 00:22:15.174388 ignition[835]: INFO : Ignition 2.14.0 Sep 6 00:22:15.174388 ignition[835]: INFO : Stage: files Sep 6 00:22:15.176141 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:15.176141 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:15.176141 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:22:15.179871 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:22:15.179871 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:22:15.179871 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:22:15.179871 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:22:15.185526 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:22:15.185526 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 6 00:22:15.185526 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 6 00:22:15.180185 unknown[835]: wrote ssh authorized keys file for user: core Sep 6 00:22:15.252883 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 6 00:22:15.817007 systemd-networkd[718]: eth0: Gained IPv6LL Sep 6 00:22:16.042838 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 6 00:22:16.044810 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:22:16.046446 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 00:22:16.285786 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:22:16.397625 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:22:16.399748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 6 00:22:16.778038 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:22:17.477988 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:22:17.477988 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 6 00:22:17.481568 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:22:17.483387 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:22:17.483387 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 6 00:22:17.486280 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 6 00:22:17.486280 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:22:17.489301 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 
00:22:17.489301 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 6 00:22:17.489301 ignition[835]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:22:17.493601 ignition[835]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:22:17.567617 ignition[835]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:22:17.569512 ignition[835]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:22:17.569512 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:22:17.569512 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:22:17.569512 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:22:17.569512 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:22:17.569512 ignition[835]: INFO : files: files passed Sep 6 00:22:17.569512 ignition[835]: INFO : Ignition finished successfully Sep 6 00:22:17.578954 systemd[1]: Finished ignition-files.service. Sep 6 00:22:17.584943 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 6 00:22:17.584970 kernel: audit: type=1130 audit(1757118137.578:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:17.580438 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:22:17.584915 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:22:17.589639 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:22:17.594817 kernel: audit: type=1130 audit(1757118137.589:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.585569 systemd[1]: Starting ignition-quench.service... Sep 6 00:22:17.601901 kernel: audit: type=1130 audit(1757118137.594:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.601916 kernel: audit: type=1131 audit(1757118137.594:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:17.602025 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:22:17.586913 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:22:17.589808 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:22:17.589875 systemd[1]: Finished ignition-quench.service. Sep 6 00:22:17.595005 systemd[1]: Reached target ignition-complete.target. Sep 6 00:22:17.602627 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:22:17.617050 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:22:17.617130 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:22:17.625886 kernel: audit: type=1130 audit(1757118137.618:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.625902 kernel: audit: type=1131 audit(1757118137.618:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.618830 systemd[1]: Reached target initrd-fs.target. Sep 6 00:22:17.625894 systemd[1]: Reached target initrd.target. Sep 6 00:22:17.626635 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:22:17.627296 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:22:17.637474 systemd[1]: Finished dracut-pre-pivot.service. 
Sep 6 00:22:17.642317 kernel: audit: type=1130 audit(1757118137.637:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.638944 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:22:17.647326 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:22:17.648185 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:22:17.649703 systemd[1]: Stopped target timers.target. Sep 6 00:22:17.651144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:22:17.656119 kernel: audit: type=1131 audit(1757118137.651:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.651234 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:22:17.652653 systemd[1]: Stopped target initrd.target. Sep 6 00:22:17.657014 systemd[1]: Stopped target basic.target. Sep 6 00:22:17.658374 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:22:17.659891 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:22:17.661376 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:22:17.663026 systemd[1]: Stopped target remote-fs.target. Sep 6 00:22:17.664598 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:22:17.666199 systemd[1]: Stopped target sysinit.target. Sep 6 00:22:17.667634 systemd[1]: Stopped target local-fs.target. 
Sep 6 00:22:17.669120 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:22:17.670603 systemd[1]: Stopped target swap.target. Sep 6 00:22:17.677563 kernel: audit: type=1131 audit(1757118137.673:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.671971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:22:17.672081 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:22:17.683743 kernel: audit: type=1131 audit(1757118137.679:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.673551 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:22:17.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.677616 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:22:17.677704 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:22:17.679381 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:22:17.679480 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:22:17.683872 systemd[1]: Stopped target paths.target. Sep 6 00:22:17.685255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 6 00:22:17.688787 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:22:17.690297 systemd[1]: Stopped target slices.target. Sep 6 00:22:17.691964 systemd[1]: Stopped target sockets.target. Sep 6 00:22:17.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.693468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:22:17.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.693568 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:22:17.699618 iscsid[733]: iscsid shutting down. Sep 6 00:22:17.695061 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:22:17.695148 systemd[1]: Stopped ignition-files.service. Sep 6 00:22:17.697166 systemd[1]: Stopping ignition-mount.service... Sep 6 00:22:17.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.698265 systemd[1]: Stopping iscsid.service... Sep 6 00:22:17.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:17.706671 ignition[875]: INFO : Ignition 2.14.0 Sep 6 00:22:17.706671 ignition[875]: INFO : Stage: umount Sep 6 00:22:17.706671 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:22:17.706671 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:22:17.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.699568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:22:17.714228 ignition[875]: INFO : umount: umount passed Sep 6 00:22:17.714228 ignition[875]: INFO : Ignition finished successfully Sep 6 00:22:17.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.699703 systemd[1]: Stopped kmod-static-nodes.service. 
Sep 6 00:22:17.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.701795 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:22:17.702775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:22:17.702955 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:22:17.704473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:22:17.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.704598 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:22:17.707930 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:22:17.708013 systemd[1]: Stopped iscsid.service. Sep 6 00:22:17.709306 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:22:17.709375 systemd[1]: Stopped ignition-mount.service. Sep 6 00:22:17.711635 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:22:17.711711 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:22:17.713206 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:22:17.713233 systemd[1]: Closed iscsid.socket. Sep 6 00:22:17.714207 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:22:17.714239 systemd[1]: Stopped ignition-disks.service. Sep 6 00:22:17.715770 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:22:17.715802 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:22:17.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.717271 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 6 00:22:17.717715 systemd[1]: Stopped ignition-setup.service. Sep 6 00:22:17.720182 systemd[1]: Stopping iscsiuio.service... Sep 6 00:22:17.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.722826 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:22:17.723204 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:22:17.744000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:22:17.723289 systemd[1]: Stopped iscsiuio.service. Sep 6 00:22:17.724752 systemd[1]: Stopped target network.target. Sep 6 00:22:17.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.726462 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:22:17.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.726494 systemd[1]: Closed iscsiuio.socket. Sep 6 00:22:17.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.728204 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:22:17.729622 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:22:17.735822 systemd-networkd[718]: eth0: DHCPv6 lease lost Sep 6 00:22:17.755000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:22:17.737083 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:22:17.737166 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:22:17.740716 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 6 00:22:17.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.740832 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:22:17.744179 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:22:17.744206 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:22:17.746583 systemd[1]: Stopping network-cleanup.service... Sep 6 00:22:17.747317 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:22:17.747358 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:22:17.749064 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:22:17.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.749143 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:22:17.751044 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:22:17.751077 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:22:17.752412 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:22:17.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:17.755190 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:22:17.758359 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:22:17.758487 systemd[1]: Stopped network-cleanup.service. Sep 6 00:22:17.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.766456 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:22:17.766579 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:22:17.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.769379 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:22:17.769426 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:22:17.771290 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:22:17.771423 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:22:17.773106 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:22:17.773166 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:22:17.774835 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:22:17.774882 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:22:17.776586 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:22:17.776620 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:22:17.778558 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Sep 6 00:22:17.779870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:22:17.779925 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:22:17.784513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:22:17.784590 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:22:17.834768 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:22:17.834887 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:22:17.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.836807 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:22:17.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:17.838163 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:22:17.838205 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:22:17.839307 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:22:17.855903 systemd[1]: Switching root. Sep 6 00:22:17.874121 systemd-journald[199]: Journal stopped Sep 6 00:22:21.901672 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Sep 6 00:22:21.901741 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:22:21.901762 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:22:21.901779 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:22:21.901790 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:22:21.901800 kernel: SELinux: policy capability open_perms=1 Sep 6 00:22:21.901810 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:22:21.901820 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:22:21.901830 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:22:21.901844 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:22:21.901854 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:22:21.901863 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:22:21.901878 systemd[1]: Successfully loaded SELinux policy in 38.706ms. Sep 6 00:22:21.901897 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.337ms. Sep 6 00:22:21.901909 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:22:21.901919 systemd[1]: Detected virtualization kvm. Sep 6 00:22:21.901934 systemd[1]: Detected architecture x86-64. Sep 6 00:22:21.901947 systemd[1]: Detected first boot. Sep 6 00:22:21.901958 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:22:21.901968 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:22:21.901978 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:22:21.901989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:22:21.902000 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:22:21.902012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:22:21.902025 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:22:21.902035 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:22:21.902046 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:22:21.902056 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:22:21.902067 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:22:21.902077 systemd[1]: Created slice system-getty.slice. Sep 6 00:22:21.902087 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:22:21.902097 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:22:21.902111 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:22:21.902123 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:22:21.902133 systemd[1]: Created slice user.slice. Sep 6 00:22:21.902144 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:22:21.902154 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:22:21.902165 systemd[1]: Set up automount boot.automount. Sep 6 00:22:21.902176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:22:21.902186 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:22:21.902197 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:22:21.902209 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:22:21.902219 systemd[1]: Reached target integritysetup.target. Sep 6 00:22:21.902229 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:22:21.902240 systemd[1]: Reached target remote-fs.target. 
Sep 6 00:22:21.902250 systemd[1]: Reached target slices.target. Sep 6 00:22:21.902260 systemd[1]: Reached target swap.target. Sep 6 00:22:21.902281 systemd[1]: Reached target torcx.target. Sep 6 00:22:21.902291 systemd[1]: Reached target veritysetup.target. Sep 6 00:22:21.902301 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:22:21.902314 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:22:21.902324 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:22:21.902334 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:22:21.902345 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:22:21.902355 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:22:21.902365 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:22:21.902377 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:22:21.902387 systemd[1]: Mounting media.mount... Sep 6 00:22:21.902400 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:21.902412 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:22:21.902422 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:22:21.902433 systemd[1]: Mounting tmp.mount... Sep 6 00:22:21.902443 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:22:21.902454 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:22:21.902464 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:22:21.902475 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:22:21.902485 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:22:21.902496 systemd[1]: Starting modprobe@drm.service... Sep 6 00:22:21.902507 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:22:21.902519 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:22:21.902529 systemd[1]: Starting modprobe@loop.service... 
Sep 6 00:22:21.902540 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:22:21.902550 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:22:21.902560 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:22:21.902570 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:22:21.902580 kernel: loop: module loaded Sep 6 00:22:21.902590 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:22:21.902603 systemd[1]: Stopped systemd-journald.service. Sep 6 00:22:21.902613 kernel: fuse: init (API version 7.34) Sep 6 00:22:21.902625 systemd[1]: Starting systemd-journald.service... Sep 6 00:22:21.902635 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:22:21.902649 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:22:21.902659 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:22:21.902670 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:22:21.902681 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:22:21.902691 systemd[1]: Stopped verity-setup.service. Sep 6 00:22:21.902701 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:21.902715 systemd-journald[986]: Journal started Sep 6 00:22:21.902766 systemd-journald[986]: Runtime Journal (/run/log/journal/ff716b95f03c4f9e858c304984fe5a07) is 6.0M, max 48.5M, 42.5M free. 
Sep 6 00:22:17.935000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:22:18.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:22:18.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:22:18.300000 audit: BPF prog-id=10 op=LOAD Sep 6 00:22:18.300000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:22:18.300000 audit: BPF prog-id=11 op=LOAD Sep 6 00:22:18.300000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:22:18.341000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:22:18.341000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:18.341000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:22:18.343000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:22:18.343000 audit[909]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a5 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:18.343000 audit: CWD cwd="/" Sep 6 00:22:18.343000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:18.343000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:18.343000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:22:21.769000 audit: BPF prog-id=12 op=LOAD Sep 6 00:22:21.769000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:22:21.769000 audit: BPF prog-id=13 op=LOAD Sep 6 00:22:21.769000 audit: BPF prog-id=14 op=LOAD Sep 6 00:22:21.769000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:22:21.769000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:22:21.770000 audit: BPF prog-id=15 op=LOAD Sep 6 00:22:21.770000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:22:21.771000 audit: BPF prog-id=16 op=LOAD Sep 6 00:22:21.771000 audit: BPF prog-id=17 op=LOAD Sep 6 00:22:21.771000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:22:21.771000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:22:21.771000 audit: BPF prog-id=18 op=LOAD Sep 6 00:22:21.771000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:22:21.771000 audit: BPF prog-id=19 op=LOAD Sep 6 00:22:21.771000 audit: BPF prog-id=20 op=LOAD Sep 6 00:22:21.772000 audit: BPF prog-id=16 op=UNLOAD Sep 
6 00:22:21.772000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:22:21.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.780000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:22:21.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:21.881000 audit: BPF prog-id=21 op=LOAD Sep 6 00:22:21.881000 audit: BPF prog-id=22 op=LOAD Sep 6 00:22:21.881000 audit: BPF prog-id=23 op=LOAD Sep 6 00:22:21.881000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:22:21.881000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:22:21.900000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:22:21.900000 audit[986]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca92a6850 a2=4000 a3=7ffca92a68ec items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:21.900000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:22:21.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.768874 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:22:18.340537 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:22:21.768885 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:22:18.340786 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:22:21.773011 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:22:18.340804 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:22:18.340836 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:22:18.340846 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:22:18.340874 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:22:18.340886 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:22:18.341112 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:22:18.341185 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:22:18.341201 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:22:18.341865 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:22:18.341897 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:22:18.341913 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:22:18.341928 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:22:18.341943 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:22:18.341955 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:22:21.494519 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:22:21.494793 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:22:21.494909 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:22:21.495099 
/usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:22:21.495146 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:22:21.495215 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-09-06T00:22:21Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:22:21.905846 systemd[1]: Started systemd-journald.service. Sep 6 00:22:21.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.906501 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:22:21.907348 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:22:21.908149 systemd[1]: Mounted media.mount. Sep 6 00:22:21.908901 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:22:21.909757 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:22:21.910662 systemd[1]: Mounted tmp.mount. Sep 6 00:22:21.911576 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:22:21.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:21.912611 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:22:21.912745 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:22:21.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.913781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:21.913934 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:22:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.915063 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:22:21.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.916089 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:22:21.916236 systemd[1]: Finished modprobe@drm.service. Sep 6 00:22:21.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:21.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.917341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:21.917487 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:21.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.918630 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:22:21.918844 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:22:21.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.919817 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:21.919921 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:21.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.921015 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:22:21.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.922171 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:22:21.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.923332 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:22:21.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.924587 systemd[1]: Reached target network-pre.target. Sep 6 00:22:21.926702 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:22:21.928694 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:22:21.929455 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:22:21.931100 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:22:21.933080 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:22:21.934057 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:22:21.935297 systemd[1]: Starting systemd-random-seed.service... 
Sep 6 00:22:21.937222 systemd-journald[986]: Time spent on flushing to /var/log/journal/ff716b95f03c4f9e858c304984fe5a07 is 33.778ms for 1106 entries. Sep 6 00:22:21.937222 systemd-journald[986]: System Journal (/var/log/journal/ff716b95f03c4f9e858c304984fe5a07) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:22:21.996425 systemd-journald[986]: Received client request to flush runtime journal. Sep 6 00:22:21.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.936203 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:22:21.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:21.938619 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:22:21.940674 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:22:21.943498 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Sep 6 00:22:21.998790 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:22:21.944712 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:22:21.946546 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:22:21.947647 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:22:21.955761 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:22:21.958858 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:22:21.959922 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:22:21.961853 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:22:21.997400 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:22:22.753685 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:22:22.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.755535 kernel: kauditd_printk_skb: 105 callbacks suppressed Sep 6 00:22:22.755611 kernel: audit: type=1130 audit(1757118142.753:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.757000 audit: BPF prog-id=24 op=LOAD Sep 6 00:22:22.759439 kernel: audit: type=1334 audit(1757118142.757:142): prog-id=24 op=LOAD Sep 6 00:22:22.759491 kernel: audit: type=1334 audit(1757118142.758:143): prog-id=25 op=LOAD Sep 6 00:22:22.758000 audit: BPF prog-id=25 op=LOAD Sep 6 00:22:22.760252 systemd[1]: Starting systemd-udevd.service... 
Sep 6 00:22:22.758000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:22:22.758000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:22:22.760746 kernel: audit: type=1334 audit(1757118142.758:144): prog-id=7 op=UNLOAD Sep 6 00:22:22.760771 kernel: audit: type=1334 audit(1757118142.758:145): prog-id=8 op=UNLOAD Sep 6 00:22:22.777844 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Sep 6 00:22:22.791062 systemd[1]: Started systemd-udevd.service. Sep 6 00:22:22.795752 kernel: audit: type=1130 audit(1757118142.791:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.794000 audit: BPF prog-id=26 op=LOAD Sep 6 00:22:22.798750 kernel: audit: type=1334 audit(1757118142.794:147): prog-id=26 op=LOAD Sep 6 00:22:22.798972 systemd[1]: Starting systemd-networkd.service... Sep 6 00:22:22.803000 audit: BPF prog-id=27 op=LOAD Sep 6 00:22:22.803000 audit: BPF prog-id=28 op=LOAD Sep 6 00:22:22.805946 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:22:22.806644 kernel: audit: type=1334 audit(1757118142.803:148): prog-id=27 op=LOAD Sep 6 00:22:22.806708 kernel: audit: type=1334 audit(1757118142.803:149): prog-id=28 op=LOAD Sep 6 00:22:22.806768 kernel: audit: type=1334 audit(1757118142.803:150): prog-id=29 op=LOAD Sep 6 00:22:22.803000 audit: BPF prog-id=29 op=LOAD Sep 6 00:22:22.822698 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:22:22.836159 systemd[1]: Started systemd-userdbd.service. 
Sep 6 00:22:22.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:22:22.873767 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:22:22.884776 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:22:22.888780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:22:22.893217 systemd-networkd[1025]: lo: Link UP Sep 6 00:22:22.893560 systemd-networkd[1025]: lo: Gained carrier Sep 6 00:22:22.894034 systemd-networkd[1025]: Enumeration completed Sep 6 00:22:22.894208 systemd[1]: Started systemd-networkd.service. Sep 6 00:22:22.894218 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:22:22.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:22:22.895589 systemd-networkd[1025]: eth0: Link UP Sep 6 00:22:22.895694 systemd-networkd[1025]: eth0: Gained carrier Sep 6 00:22:22.899000 audit[1019]: AVC avc: denied { confidentiality } for pid=1019 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:22:22.899000 audit[1019]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5595609d36c0 a1=338ec a2=7fc3aba94bc5 a3=5 items=110 ppid=1015 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:22:22.899000 audit: CWD cwd="/" Sep 6 00:22:22.899000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=1 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=2 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=3 name=(null) inode=14810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=4 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=5 name=(null) inode=14811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=6 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=7 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=8 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=9 name=(null) inode=14813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=10 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=11 name=(null) inode=14814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=12 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=13 name=(null) inode=14815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=14 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=15 name=(null) inode=14816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=16 name=(null) inode=14812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=17 name=(null) inode=14817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=18 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=19 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=20 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=21 name=(null) inode=14819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=22 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH item=23 name=(null) inode=14820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:22:22.899000 audit: PATH 
item=24 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=25 name=(null) inode=14821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=26 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=27 name=(null) inode=14822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=28 name=(null) inode=14818 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=29 name=(null) inode=14823 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=30 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=31 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=32 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=33 name=(null) inode=14825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=34 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=35 name=(null) inode=14826 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=36 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=37 name=(null) inode=14827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=38 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=39 name=(null) inode=14828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=40 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=41 name=(null) inode=14829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=42 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=43 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=44 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=45 name=(null) inode=14831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=46 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=47 name=(null) inode=14832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=48 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=49 name=(null) inode=14833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=50 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=51 name=(null) inode=14834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=52 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=53 name=(null) inode=14835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=55 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=56 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=57 name=(null) inode=14837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=58 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=59 name=(null) inode=14838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=60 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=61 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=62 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=63 name=(null) inode=14840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=64 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=65 name=(null) inode=14841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=66 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=67 name=(null) inode=14842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=68 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=69 name=(null) inode=14843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=70 name=(null) inode=14839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=71 name=(null) inode=14844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=72 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=73 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=74 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=75 name=(null) inode=14846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=76 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=77 name=(null) inode=14847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=78 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=79 name=(null) inode=14848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=80 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=81 name=(null) inode=14849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=82 name=(null) inode=14845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=83 name=(null) inode=14850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=84 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=85 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=86 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=87 name=(null) inode=14852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=88 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=89 name=(null) inode=14853 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=90 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=91 name=(null) inode=14854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=92 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=93 name=(null) inode=14855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=94 name=(null) inode=14851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=95 name=(null) inode=14856 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=96 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=97 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=98 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=99 name=(null) inode=14858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=100 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=101 name=(null) inode=14859 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=102 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=103 name=(null) inode=14860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=104 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=105 name=(null) inode=14861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=106 name=(null) inode=14857 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=107 name=(null) inode=14862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PATH item=109 name=(null) inode=14863 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:22:22.899000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 6 00:22:22.909886 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:22:22.926786 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 6 00:22:22.933790 kernel: mousedev: PS/2 mouse device common for all mice
Sep 6 00:22:22.936751 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 6 00:22:22.940219 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 6 00:22:22.940357 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 6 00:22:22.987152 kernel: kvm: Nested Virtualization enabled
Sep 6 00:22:22.987228 kernel: SVM: kvm: Nested Paging enabled
Sep 6 00:22:22.987973 kernel: SVM: Virtual VMLOAD VMSAVE supported
Sep 6 00:22:22.988099 kernel: SVM: Virtual GIF supported
Sep 6 00:22:23.004873 kernel: EDAC MC: Ver: 3.0.0
Sep 6 00:22:23.028203 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:22:23.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.030462 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:22:23.038318 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:22:23.068659 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:22:23.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.069633 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:22:23.071543 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:22:23.075290 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:22:23.102741 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:22:23.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.103594 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:22:23.104406 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:22:23.104428 systemd[1]: Reached target local-fs.target.
Sep 6 00:22:23.105171 systemd[1]: Reached target machines.target.
Sep 6 00:22:23.106832 systemd[1]: Starting ldconfig.service...
Sep 6 00:22:23.107712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.107781 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:22:23.108657 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:22:23.110328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:22:23.112389 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:22:23.114298 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:22:23.115358 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1053 (bootctl)
Sep 6 00:22:23.118867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:22:23.123713 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:22:23.128669 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:22:23.128830 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:22:23.135895 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:22:23.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.137746 kernel: loop0: detected capacity change from 0 to 224512
Sep 6 00:22:23.212739 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:22:23.212739 systemd-fsck[1063]: /dev/vda1: 790 files, 120761/258078 clusters
Sep 6 00:22:23.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.214460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:22:23.223945 systemd[1]: Mounting boot.mount...
Sep 6 00:22:23.479121 systemd[1]: Mounted boot.mount.
Sep 6 00:22:23.486744 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:22:23.491667 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:22:23.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.499756 kernel: loop1: detected capacity change from 0 to 224512
Sep 6 00:22:23.505468 (sd-sysext)[1068]: Using extensions 'kubernetes'.
Sep 6 00:22:23.505901 (sd-sysext)[1068]: Merged extensions into '/usr'.
Sep 6 00:22:23.520890 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:22:23.522472 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:22:23.523366 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.524755 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:22:23.526966 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:22:23.530024 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:22:23.530804 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.530911 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:22:23.531018 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:22:23.533549 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:22:23.534815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:22:23.534928 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:22:23.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.536399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:22:23.536553 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:22:23.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.537803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:22:23.537919 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:22:23.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.539210 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:22:23.539322 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.540298 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:22:23.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.542381 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:22:23.543939 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:22:23.551667 systemd[1]: Reloading.
Sep 6 00:22:23.591026 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:22:23.597094 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:22:23.601529 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:22:23.676507 ldconfig[1052]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:22:23.684286 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-06T00:22:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:22:23.684311 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-06T00:22:23Z" level=info msg="torcx already run"
Sep 6 00:22:23.760338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:22:23.760353 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:22:23.777322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:22:23.830164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:22:23.831000 audit: BPF prog-id=30 op=LOAD
Sep 6 00:22:23.832000 audit: BPF prog-id=26 op=UNLOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=31 op=LOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=21 op=UNLOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=32 op=LOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=33 op=LOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=22 op=UNLOAD
Sep 6 00:22:23.833000 audit: BPF prog-id=23 op=UNLOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=34 op=LOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=27 op=UNLOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=35 op=LOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=36 op=LOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=28 op=UNLOAD
Sep 6 00:22:23.834000 audit: BPF prog-id=29 op=UNLOAD
Sep 6 00:22:23.835000 audit: BPF prog-id=37 op=LOAD
Sep 6 00:22:23.835000 audit: BPF prog-id=38 op=LOAD
Sep 6 00:22:23.835000 audit: BPF prog-id=24 op=UNLOAD
Sep 6 00:22:23.835000 audit: BPF prog-id=25 op=UNLOAD
Sep 6 00:22:23.838343 systemd[1]: Finished ldconfig.service.
Sep 6 00:22:23.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.839446 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:22:23.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.841317 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:22:23.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.844963 systemd[1]: Starting audit-rules.service...
Sep 6 00:22:23.846691 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:22:23.848488 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:22:23.849000 audit: BPF prog-id=39 op=LOAD
Sep 6 00:22:23.850813 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:22:23.851000 audit: BPF prog-id=40 op=LOAD
Sep 6 00:22:23.853050 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:22:23.854745 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:22:23.856095 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:22:23.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.859000 audit[1143]: SYSTEM_BOOT pid=1143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.864261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.865740 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:22:23.870419 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:22:23.872591 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:22:23.873496 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.873800 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:22:23.873958 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:22:23.875434 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:22:23.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.876888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:22:23.876998 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:22:23.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.878473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:22:23.878566 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:22:23.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.880104 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:22:23.880199 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:22:23.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:22:23.884000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:22:23.884000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf98a3730 a2=420 a3=0 items=0 ppid=1137 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:22:23.884000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:22:23.895048 augenrules[1160]: No rules
Sep 6 00:22:23.883092 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:22:23.886352 systemd[1]: Finished audit-rules.service.
Sep 6 00:22:23.888556 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.889966 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:22:23.891689 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:22:23.893591 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:22:23.894370 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:22:23.894470 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:22:23.895644 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:22:23.896669 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:22:23.897526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:23.897638 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:22:23.898994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:23.899104 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:23.900393 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:23.900532 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:23.902052 systemd[1]: Finished systemd-update-done.service. Sep 6 00:22:23.905908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:22:23.907071 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:22:23.908950 systemd[1]: Starting modprobe@drm.service... Sep 6 00:22:23.910826 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:22:23.912823 systemd[1]: Starting modprobe@loop.service... Sep 6 00:22:23.913657 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:22:23.913772 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:23.914925 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:22:23.915970 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:22:23.916554 systemd-resolved[1140]: Positive Trust Anchors: Sep 6 00:22:23.916775 systemd-resolved[1140]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:22:23.916855 systemd[1]: Started systemd-timesyncd.service. 
Sep 6 00:22:23.916955 systemd-resolved[1140]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:22:23.918396 systemd-timesyncd[1142]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:22:23.918438 systemd-timesyncd[1142]: Initial clock synchronization to Sat 2025-09-06 00:22:23.804786 UTC. Sep 6 00:22:23.918457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:22:23.918561 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:22:23.919920 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:22:23.920025 systemd[1]: Finished modprobe@drm.service. Sep 6 00:22:23.921140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:22:23.921252 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:22:23.922486 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:22:23.922586 systemd[1]: Finished modprobe@loop.service. Sep 6 00:22:23.923980 systemd[1]: Reached target time-set.target. Sep 6 00:22:23.924930 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:22:23.924963 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:22:23.925232 systemd[1]: Finished ensure-sysext.service. Sep 6 00:22:23.932249 systemd-resolved[1140]: Defaulting to hostname 'linux'. Sep 6 00:22:23.933788 systemd[1]: Started systemd-resolved.service. Sep 6 00:22:23.934638 systemd[1]: Reached target network.target. Sep 6 00:22:23.935521 systemd[1]: Reached target nss-lookup.target. 
Sep 6 00:22:23.936340 systemd[1]: Reached target sysinit.target. Sep 6 00:22:23.937163 systemd[1]: Started motdgen.path. Sep 6 00:22:23.937889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:22:23.939115 systemd[1]: Started logrotate.timer. Sep 6 00:22:23.939906 systemd[1]: Started mdadm.timer. Sep 6 00:22:23.940590 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:22:23.941445 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:22:23.941465 systemd[1]: Reached target paths.target. Sep 6 00:22:23.942212 systemd[1]: Reached target timers.target. Sep 6 00:22:23.943252 systemd[1]: Listening on dbus.socket. Sep 6 00:22:23.944799 systemd[1]: Starting docker.socket... Sep 6 00:22:23.947924 systemd[1]: Listening on sshd.socket. Sep 6 00:22:23.948834 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:23.949196 systemd[1]: Listening on docker.socket. Sep 6 00:22:23.949993 systemd[1]: Reached target sockets.target. Sep 6 00:22:23.950773 systemd[1]: Reached target basic.target. Sep 6 00:22:23.951534 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:22:23.951558 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:22:23.952459 systemd[1]: Starting containerd.service... Sep 6 00:22:23.954145 systemd[1]: Starting dbus.service... Sep 6 00:22:23.955537 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:22:23.957473 systemd[1]: Starting extend-filesystems.service... Sep 6 00:22:23.958441 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Sep 6 00:22:23.959911 systemd[1]: Starting motdgen.service... Sep 6 00:22:23.961568 systemd[1]: Starting prepare-helm.service... Sep 6 00:22:23.963315 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:22:23.963633 jq[1179]: false Sep 6 00:22:23.965088 systemd[1]: Starting sshd-keygen.service... Sep 6 00:22:23.968196 systemd[1]: Starting systemd-logind.service... Sep 6 00:22:23.969854 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:22:23.969973 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:22:23.970454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:22:23.971264 systemd[1]: Starting update-engine.service... Sep 6 00:22:23.973234 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:22:23.978643 jq[1195]: true Sep 6 00:22:23.978540 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:22:23.981774 dbus-daemon[1178]: [system] SELinux support is enabled Sep 6 00:22:23.979031 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:22:23.980236 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:22:23.981921 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:22:23.982969 systemd[1]: Started dbus.service. 
Sep 6 00:22:23.984665 extend-filesystems[1180]: Found loop1 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found sr0 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda1 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda2 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda3 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found usr Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda4 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda6 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda7 Sep 6 00:22:23.984665 extend-filesystems[1180]: Found vda9 Sep 6 00:22:23.984665 extend-filesystems[1180]: Checking size of /dev/vda9 Sep 6 00:22:23.986354 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:22:24.001536 tar[1199]: linux-amd64/LICENSE Sep 6 00:22:24.001536 tar[1199]: linux-amd64/helm Sep 6 00:22:23.986487 systemd[1]: Finished motdgen.service. Sep 6 00:22:24.001916 jq[1202]: true Sep 6 00:22:23.989995 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:22:23.990027 systemd[1]: Reached target system-config.target. Sep 6 00:22:24.018615 extend-filesystems[1180]: Resized partition /dev/vda9 Sep 6 00:22:23.991018 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:22:23.991038 systemd[1]: Reached target user-config.target. Sep 6 00:22:24.021443 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:22:24.094571 update_engine[1191]: I0906 00:22:24.094077 1191 main.cc:92] Flatcar Update Engine starting Sep 6 00:22:24.114718 systemd[1]: Started update-engine.service. 
Sep 6 00:22:24.115564 update_engine[1191]: I0906 00:22:24.115283 1191 update_check_scheduler.cc:74] Next update check in 6m55s Sep 6 00:22:24.118450 systemd[1]: Started locksmithd.service. Sep 6 00:22:24.131042 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:24.131081 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:22:24.148027 env[1204]: time="2025-09-06T00:22:24.147647631Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:22:24.149828 systemd-logind[1188]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:22:24.149849 systemd-logind[1188]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:22:24.150301 systemd-logind[1188]: New seat seat0. Sep 6 00:22:24.152825 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:22:24.155362 systemd[1]: Started systemd-logind.service. Sep 6 00:22:24.160015 bash[1231]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:22:24.159224 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:22:24.171753 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:22:24.173680 env[1204]: time="2025-09-06T00:22:24.173521459Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:22:24.204806 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:22:24.204806 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:22:24.204806 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:22:24.200592 systemd[1]: extend-filesystems.service: Deactivated successfully. 
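The EXT4 and extend-filesystems entries above record an on-line grow of the root filesystem from 553472 to 1864699 blocks of 4 KiB each. A quick Python sketch of the byte arithmetic those numbers imply:

```python
# Block counts taken from the EXT4 resize messages above; 4 KiB blocks.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
# -> 2.11 GiB -> 7.11 GiB
```

In other words, /dev/vda9 roughly tripled during first boot to fill the available disk, with the filesystem mounted the whole time.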
Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.199956890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201426656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201455881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201643872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201658717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201670647Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201680060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.201759053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.202004141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:22:24.208814 env[1204]: time="2025-09-06T00:22:24.202114472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:22:24.209184 extend-filesystems[1180]: Resized filesystem in /dev/vda9 Sep 6 00:22:24.200747 systemd[1]: Finished extend-filesystems.service. Sep 6 00:22:24.210203 env[1204]: time="2025-09-06T00:22:24.202127203Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:22:24.210203 env[1204]: time="2025-09-06T00:22:24.202168102Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:22:24.210203 env[1204]: time="2025-09-06T00:22:24.202180201Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:22:24.210889 env[1204]: time="2025-09-06T00:22:24.210854993Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:22:24.210939 env[1204]: time="2025-09-06T00:22:24.210889472Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:22:24.210939 env[1204]: time="2025-09-06T00:22:24.210905502Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:22:24.210982 env[1204]: time="2025-09-06T00:22:24.210953275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.210982 env[1204]: time="2025-09-06T00:22:24.210967566Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 6 00:22:24.211021 env[1204]: time="2025-09-06T00:22:24.210981552Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211021 env[1204]: time="2025-09-06T00:22:24.210995932Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211021 env[1204]: time="2025-09-06T00:22:24.211008910Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211022253Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211035577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211047300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211061019Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211163410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:22:24.211342 env[1204]: time="2025-09-06T00:22:24.211267469Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:22:24.211550 env[1204]: time="2025-09-06T00:22:24.211523560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:22:24.211596 env[1204]: time="2025-09-06T00:22:24.211559382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 6 00:22:24.211596 env[1204]: time="2025-09-06T00:22:24.211573773Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:22:24.211668 env[1204]: time="2025-09-06T00:22:24.211640173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211668 env[1204]: time="2025-09-06T00:22:24.211660883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211675521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211689605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211700824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211722060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211743471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211753398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211782 env[1204]: time="2025-09-06T00:22:24.211765082Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:22:24.211943 env[1204]: time="2025-09-06T00:22:24.211890999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 6 00:22:24.211943 env[1204]: time="2025-09-06T00:22:24.211905813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211943 env[1204]: time="2025-09-06T00:22:24.211916332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:22:24.211943 env[1204]: time="2025-09-06T00:22:24.211928885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:22:24.211943 env[1204]: time="2025-09-06T00:22:24.211940954Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:22:24.212043 env[1204]: time="2025-09-06T00:22:24.211950940Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:22:24.212043 env[1204]: time="2025-09-06T00:22:24.211981656Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:22:24.212043 env[1204]: time="2025-09-06T00:22:24.212022031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:22:24.212310 env[1204]: time="2025-09-06T00:22:24.212250932Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:22:24.213338 env[1204]: time="2025-09-06T00:22:24.212315969Z" level=info msg="Connect containerd service" Sep 6 00:22:24.213338 env[1204]: time="2025-09-06T00:22:24.212363861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:22:24.213338 env[1204]: time="2025-09-06T00:22:24.213070482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:22:24.213338 env[1204]: time="2025-09-06T00:22:24.213298206Z" level=info msg="Start subscribing containerd event" Sep 6 00:22:24.213433 env[1204]: time="2025-09-06T00:22:24.213365733Z" level=info msg="Start recovering state" Sep 6 00:22:24.213433 env[1204]: time="2025-09-06T00:22:24.213427026Z" level=info msg="Start event monitor" Sep 6 00:22:24.213478 env[1204]: time="2025-09-06T00:22:24.213443185Z" level=info msg="Start snapshots syncer" Sep 6 00:22:24.213478 env[1204]: time="2025-09-06T00:22:24.213453091Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:22:24.213478 env[1204]: time="2025-09-06T00:22:24.213459531Z" level=info msg="Start streaming server" Sep 6 00:22:24.216413 env[1204]: time="2025-09-06T00:22:24.216326292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:22:24.217442 env[1204]: time="2025-09-06T00:22:24.216436929Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:22:24.216636 systemd[1]: Started containerd.service. 
Sep 6 00:22:24.219050 env[1204]: time="2025-09-06T00:22:24.219019827Z" level=info msg="containerd successfully booted in 0.085910s" Sep 6 00:22:24.325260 locksmithd[1223]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:22:24.413569 sshd_keygen[1203]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:22:24.436810 systemd[1]: Finished sshd-keygen.service. Sep 6 00:22:24.439284 systemd[1]: Starting issuegen.service... Sep 6 00:22:24.444509 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:22:24.444677 systemd[1]: Finished issuegen.service. Sep 6 00:22:24.447084 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:22:24.454673 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:22:24.457194 systemd[1]: Started getty@tty1.service. Sep 6 00:22:24.459526 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:22:24.460621 systemd[1]: Reached target getty.target. Sep 6 00:22:24.520937 systemd-networkd[1025]: eth0: Gained IPv6LL Sep 6 00:22:24.523147 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:22:24.524744 systemd[1]: Reached target network-online.target. Sep 6 00:22:24.527094 systemd[1]: Starting kubelet.service... Sep 6 00:22:24.627524 systemd[1]: Created slice system-sshd.slice. Sep 6 00:22:24.630214 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:32776.service. Sep 6 00:22:24.673839 tar[1199]: linux-amd64/README.md Sep 6 00:22:24.678799 systemd[1]: Finished prepare-helm.service. Sep 6 00:22:24.692136 sshd[1258]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:24.694028 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:24.703504 systemd-logind[1188]: New session 1 of user core. Sep 6 00:22:24.704493 systemd[1]: Created slice user-500.slice. Sep 6 00:22:24.706632 systemd[1]: Starting user-runtime-dir@500.service... 
Sep 6 00:22:24.717526 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:22:24.720052 systemd[1]: Starting user@500.service... Sep 6 00:22:24.723836 (systemd)[1262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:24.798251 systemd[1262]: Queued start job for default target default.target. Sep 6 00:22:24.798762 systemd[1262]: Reached target paths.target. Sep 6 00:22:24.798781 systemd[1262]: Reached target sockets.target. Sep 6 00:22:24.798793 systemd[1262]: Reached target timers.target. Sep 6 00:22:24.798804 systemd[1262]: Reached target basic.target. Sep 6 00:22:24.798910 systemd[1]: Started user@500.service. Sep 6 00:22:24.799182 systemd[1262]: Reached target default.target. Sep 6 00:22:24.799214 systemd[1262]: Startup finished in 67ms. Sep 6 00:22:24.800872 systemd[1]: Started session-1.scope. Sep 6 00:22:24.910140 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:32792.service. Sep 6 00:22:25.003024 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 32792 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:25.004587 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:25.008270 systemd-logind[1188]: New session 2 of user core. Sep 6 00:22:25.009125 systemd[1]: Started session-2.scope. Sep 6 00:22:25.065382 sshd[1271]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:25.068232 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:32792.service: Deactivated successfully. Sep 6 00:22:25.068782 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:22:25.069253 systemd-logind[1188]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:22:25.070422 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:32794.service. Sep 6 00:22:25.072275 systemd-logind[1188]: Removed session 2. 
Sep 6 00:22:25.109782 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 32794 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:25.111065 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:25.114483 systemd-logind[1188]: New session 3 of user core. Sep 6 00:22:25.115270 systemd[1]: Started session-3.scope. Sep 6 00:22:25.187138 sshd[1277]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:25.189297 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:32794.service: Deactivated successfully. Sep 6 00:22:25.190116 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:22:25.190553 systemd-logind[1188]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:22:25.191230 systemd-logind[1188]: Removed session 3. Sep 6 00:22:25.868525 systemd[1]: Started kubelet.service. Sep 6 00:22:25.870150 systemd[1]: Reached target multi-user.target. Sep 6 00:22:25.872750 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:22:25.880846 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:22:25.881087 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:22:25.882431 systemd[1]: Startup finished in 818ms (kernel) + 6.024s (initrd) + 7.986s (userspace) = 14.829s. Sep 6 00:22:26.525467 kubelet[1284]: E0906 00:22:26.525379 1284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:22:26.527288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:22:26.527429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:22:26.527675 systemd[1]: kubelet.service: Consumed 1.889s CPU time. 
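The `Startup finished` line above splits boot time into kernel, initrd, and userspace phases; each figure is rounded independently, so the printed total can differ from the sum of the parts by a millisecond. A small Python sketch that parses the line (journal timestamp prefix omitted):

```python
import re

# The "Startup finished" message from the log above, minus the timestamp prefix.
line = ("Startup finished in 818ms (kernel) + 6.024s (initrd) "
        "+ 7.986s (userspace) = 14.829s.")

def seconds(value: str) -> float:
    # "818ms" -> 0.818; "6.024s" -> 6.024
    return float(value[:-2]) / 1000 if value.endswith("ms") else float(value[:-1])

phases = {name: seconds(v)
          for v, name in re.findall(r"([\d.]+m?s) \((\w+)\)", line)}
total = seconds(re.search(r"= ([\d.]+m?s)", line).group(1))
print(phases, total)
# Each phase is rounded independently, so allow a small tolerance.
assert abs(sum(phases.values()) - total) < 0.005
```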
Sep 6 00:22:35.121546 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:51196.service. Sep 6 00:22:35.170066 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 51196 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:35.171634 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:35.177205 systemd-logind[1188]: New session 4 of user core. Sep 6 00:22:35.177999 systemd[1]: Started session-4.scope. Sep 6 00:22:35.232189 sshd[1293]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:35.235628 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:51196.service: Deactivated successfully. Sep 6 00:22:35.236270 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:22:35.236796 systemd-logind[1188]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:22:35.238074 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:51206.service. Sep 6 00:22:35.238843 systemd-logind[1188]: Removed session 4. Sep 6 00:22:35.275681 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 51206 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:35.276747 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:35.280229 systemd-logind[1188]: New session 5 of user core. Sep 6 00:22:35.281094 systemd[1]: Started session-5.scope. Sep 6 00:22:35.329014 sshd[1299]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:35.331933 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:51206.service: Deactivated successfully. Sep 6 00:22:35.332515 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:22:35.333058 systemd-logind[1188]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:22:35.334154 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:51216.service. Sep 6 00:22:35.334801 systemd-logind[1188]: Removed session 5. 
Sep 6 00:22:35.370285 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:35.371334 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:35.374314 systemd-logind[1188]: New session 6 of user core. Sep 6 00:22:35.375051 systemd[1]: Started session-6.scope. Sep 6 00:22:35.427600 sshd[1305]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:35.429831 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:51216.service: Deactivated successfully. Sep 6 00:22:35.430336 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:22:35.430804 systemd-logind[1188]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:22:35.431702 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:51224.service. Sep 6 00:22:35.432311 systemd-logind[1188]: Removed session 6. Sep 6 00:22:35.467185 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 51224 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:35.468209 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:35.471250 systemd-logind[1188]: New session 7 of user core. Sep 6 00:22:35.471980 systemd[1]: Started session-7.scope. Sep 6 00:22:35.526859 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:22:35.527075 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:22:35.574482 systemd[1]: Starting docker.service... 
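The sshd acceptance lines above all follow OpenSSH's fixed `Accepted <method> for <user> from <ip> port <port> ssh2: <keytype> <fingerprint>` layout, so they field-split cleanly. A small regex sketch against one of the log's lines (timestamp prefix omitted):

```python
import re

# One "Accepted publickey" line from the log, minus the journal prefix.
line = ("Accepted publickey for core from 10.0.0.1 port 51224 ssh2: "
        "RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg")

pattern = re.compile(
    r"Accepted (?P<method>\S+) for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)")

m = pattern.match(line)
print(m.group("user"), m.group("ip"), m.group("port"), m.group("fingerprint"))
```

Every session in this log carries the same SHA256 fingerprint, i.e. all connections came from the same client key.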
Sep 6 00:22:35.621017 env[1326]: time="2025-09-06T00:22:35.620945558Z" level=info msg="Starting up" Sep 6 00:22:35.622503 env[1326]: time="2025-09-06T00:22:35.622451875Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:22:35.622503 env[1326]: time="2025-09-06T00:22:35.622482413Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:22:35.622503 env[1326]: time="2025-09-06T00:22:35.622507098Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:22:35.622755 env[1326]: time="2025-09-06T00:22:35.622520260Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:22:35.626612 env[1326]: time="2025-09-06T00:22:35.625835681Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:22:35.626612 env[1326]: time="2025-09-06T00:22:35.625854655Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:22:35.626612 env[1326]: time="2025-09-06T00:22:35.625876304Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:22:35.626612 env[1326]: time="2025-09-06T00:22:35.625893809Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:22:35.657781 env[1326]: time="2025-09-06T00:22:35.657740825Z" level=info msg="Loading containers: start." Sep 6 00:22:35.779754 kernel: Initializing XFRM netlink socket Sep 6 00:22:35.890148 env[1326]: time="2025-09-06T00:22:35.890023659Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:22:35.937078 systemd-networkd[1025]: docker0: Link UP Sep 6 00:22:35.956025 env[1326]: time="2025-09-06T00:22:35.955978971Z" level=info msg="Loading containers: done." 
Sep 6 00:22:35.970951 env[1326]: time="2025-09-06T00:22:35.970907168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:22:35.973864 env[1326]: time="2025-09-06T00:22:35.973833283Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:22:35.973976 env[1326]: time="2025-09-06T00:22:35.973954204Z" level=info msg="Daemon has completed initialization" Sep 6 00:22:35.992814 systemd[1]: Started docker.service. Sep 6 00:22:36.002695 env[1326]: time="2025-09-06T00:22:36.002629476Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:22:36.545078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:22:36.545273 systemd[1]: Stopped kubelet.service. Sep 6 00:22:36.545331 systemd[1]: kubelet.service: Consumed 1.889s CPU time. Sep 6 00:22:36.546779 systemd[1]: Starting kubelet.service... Sep 6 00:22:36.722032 systemd[1]: Started kubelet.service. Sep 6 00:22:36.884321 kubelet[1458]: E0906 00:22:36.884161 1458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:22:36.886984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:22:36.887108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:22:36.945343 env[1204]: time="2025-09-06T00:22:36.945221523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 6 00:22:38.068838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228024550.mount: Deactivated successfully. 
Sep 6 00:22:39.925955 env[1204]: time="2025-09-06T00:22:39.925884834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:39.928407 env[1204]: time="2025-09-06T00:22:39.928344127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:39.930205 env[1204]: time="2025-09-06T00:22:39.930174073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:39.932036 env[1204]: time="2025-09-06T00:22:39.931998261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:39.932901 env[1204]: time="2025-09-06T00:22:39.932860157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 6 00:22:39.933870 env[1204]: time="2025-09-06T00:22:39.933830449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 6 00:22:41.571999 env[1204]: time="2025-09-06T00:22:41.571919994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:41.576132 env[1204]: time="2025-09-06T00:22:41.576075537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Sep 6 00:22:41.579040 env[1204]: time="2025-09-06T00:22:41.578990241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:41.580963 env[1204]: time="2025-09-06T00:22:41.580884837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:41.581905 env[1204]: time="2025-09-06T00:22:41.581846130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 6 00:22:41.582600 env[1204]: time="2025-09-06T00:22:41.582569177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 6 00:22:44.076102 env[1204]: time="2025-09-06T00:22:44.076036587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:44.078002 env[1204]: time="2025-09-06T00:22:44.077929480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:44.079896 env[1204]: time="2025-09-06T00:22:44.079827507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:44.081619 env[1204]: time="2025-09-06T00:22:44.081587630Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:44.082297 env[1204]: time="2025-09-06T00:22:44.082249262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 6 00:22:44.082956 env[1204]: time="2025-09-06T00:22:44.082907410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 00:22:45.454205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240893066.mount: Deactivated successfully. Sep 6 00:22:46.064807 env[1204]: time="2025-09-06T00:22:46.064738577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.066402 env[1204]: time="2025-09-06T00:22:46.066373805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.068055 env[1204]: time="2025-09-06T00:22:46.068021858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.069352 env[1204]: time="2025-09-06T00:22:46.069305671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:46.069590 env[1204]: time="2025-09-06T00:22:46.069552028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference 
\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 6 00:22:46.070166 env[1204]: time="2025-09-06T00:22:46.070131160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:22:46.595160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329377175.mount: Deactivated successfully. Sep 6 00:22:47.045238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:22:47.045526 systemd[1]: Stopped kubelet.service. Sep 6 00:22:47.047411 systemd[1]: Starting kubelet.service... Sep 6 00:22:47.218817 systemd[1]: Started kubelet.service. Sep 6 00:22:47.670295 kubelet[1472]: E0906 00:22:47.670218 1472 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:22:47.672337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:22:47.672477 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:22:48.219389 env[1204]: time="2025-09-06T00:22:48.219331608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.223876 env[1204]: time="2025-09-06T00:22:48.223812705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.225593 env[1204]: time="2025-09-06T00:22:48.225533903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.227779 env[1204]: time="2025-09-06T00:22:48.227736347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.228463 env[1204]: time="2025-09-06T00:22:48.228427205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:22:48.229047 env[1204]: time="2025-09-06T00:22:48.229018334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:22:48.812076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162311524.mount: Deactivated successfully. 
Sep 6 00:22:48.816419 env[1204]: time="2025-09-06T00:22:48.816362586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.818163 env[1204]: time="2025-09-06T00:22:48.818108216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.819698 env[1204]: time="2025-09-06T00:22:48.819661316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.821020 env[1204]: time="2025-09-06T00:22:48.820988902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:48.821463 env[1204]: time="2025-09-06T00:22:48.821430947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:22:48.822118 env[1204]: time="2025-09-06T00:22:48.822086519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 6 00:22:49.266235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033200161.mount: Deactivated successfully. 
Sep 6 00:22:52.743220 env[1204]: time="2025-09-06T00:22:52.743029506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:52.744982 env[1204]: time="2025-09-06T00:22:52.744932883Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:52.746755 env[1204]: time="2025-09-06T00:22:52.746706912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:52.748851 env[1204]: time="2025-09-06T00:22:52.748793152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:22:52.749564 env[1204]: time="2025-09-06T00:22:52.749525508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 6 00:22:55.021183 systemd[1]: Stopped kubelet.service. Sep 6 00:22:55.024717 systemd[1]: Starting kubelet.service... Sep 6 00:22:55.051645 systemd[1]: Reloading. 
Sep 6 00:22:55.122207 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-09-06T00:22:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:22:55.124838 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-09-06T00:22:55Z" level=info msg="torcx already run" Sep 6 00:22:55.882039 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:22:55.882069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:22:55.899531 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:22:55.983572 systemd[1]: Stopping kubelet.service... Sep 6 00:22:55.984088 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:22:55.984253 systemd[1]: Stopped kubelet.service. Sep 6 00:22:55.985697 systemd[1]: Starting kubelet.service... Sep 6 00:22:56.094323 systemd[1]: Started kubelet.service. Sep 6 00:22:56.133600 kubelet[1574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:22:56.134046 kubelet[1574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:22:56.134046 kubelet[1574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:22:56.134242 kubelet[1574]: I0906 00:22:56.134108 1574 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:22:56.344769 kubelet[1574]: I0906 00:22:56.344695 1574 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:22:56.344769 kubelet[1574]: I0906 00:22:56.344760 1574 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:22:56.345787 kubelet[1574]: I0906 00:22:56.345749 1574 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:22:56.364941 kubelet[1574]: E0906 00:22:56.364889 1574 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:22:56.365324 kubelet[1574]: I0906 00:22:56.365274 1574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:22:56.373017 kubelet[1574]: E0906 00:22:56.372984 1574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:22:56.373017 kubelet[1574]: I0906 00:22:56.373017 1574 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 00:22:56.377876 kubelet[1574]: I0906 00:22:56.377852 1574 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:22:56.378957 kubelet[1574]: I0906 00:22:56.378918 1574 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:22:56.379182 kubelet[1574]: I0906 00:22:56.378956 1574 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:22:56.379352 kubelet[1574]: I0906 00:22:56.379198 1574 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:22:56.379352 kubelet[1574]: I0906 00:22:56.379209 1574 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 00:22:56.379406 kubelet[1574]: I0906 00:22:56.379385 1574 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:22:56.382009 kubelet[1574]: I0906 00:22:56.381991 1574 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:22:56.382049 kubelet[1574]: I0906 00:22:56.382030 1574 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:22:56.386153 kubelet[1574]: I0906 00:22:56.385280 1574 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:22:56.386352 kubelet[1574]: I0906 00:22:56.386248 1574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:22:56.391744 kubelet[1574]: W0906 00:22:56.391689 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 6 00:22:56.391807 kubelet[1574]: E0906 00:22:56.391773 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:22:56.392094 kubelet[1574]: W0906 00:22:56.392060 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 6 00:22:56.392130 kubelet[1574]: E0906 00:22:56.392093 1574
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:22:56.393947 kubelet[1574]: I0906 00:22:56.393922 1574 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:22:56.394523 kubelet[1574]: I0906 00:22:56.394504 1574 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:22:56.396170 kubelet[1574]: W0906 00:22:56.396146 1574 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:22:56.400245 kubelet[1574]: I0906 00:22:56.400192 1574 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:22:56.400245 kubelet[1574]: I0906 00:22:56.400248 1574 server.go:1287] "Started kubelet" Sep 6 00:22:56.400482 kubelet[1574]: I0906 00:22:56.400437 1574 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:22:56.401494 kubelet[1574]: I0906 00:22:56.401455 1574 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:22:56.407750 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 00:22:56.407877 kubelet[1574]: I0906 00:22:56.407852 1574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:22:56.410471 kubelet[1574]: I0906 00:22:56.410388 1574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:22:56.410672 kubelet[1574]: I0906 00:22:56.410647 1574 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:22:56.410929 kubelet[1574]: I0906 00:22:56.410898 1574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:22:56.412589 kubelet[1574]: E0906 00:22:56.412562 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:22:56.412664 kubelet[1574]: I0906 00:22:56.412618 1574 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:22:56.412858 kubelet[1574]: I0906 00:22:56.412841 1574 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:22:56.412945 kubelet[1574]: I0906 00:22:56.412927 1574 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:22:56.413253 kubelet[1574]: W0906 00:22:56.413207 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused Sep 6 00:22:56.413253 kubelet[1574]: E0906 00:22:56.413249 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:22:56.413709 kubelet[1574]: E0906 00:22:56.413671 1574 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Sep 6 00:22:56.413877 kubelet[1574]: I0906 00:22:56.413858 1574 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:22:56.413979 kubelet[1574]: I0906 00:22:56.413956 1574 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:22:56.416825 kubelet[1574]: E0906 00:22:56.416785 1574 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:22:56.416943 kubelet[1574]: I0906 00:22:56.416893 1574 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:22:56.417781 kubelet[1574]: E0906 00:22:56.415067 1574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186289abfcb79d13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:22:56.400219411 +0000 UTC m=+0.301779496,LastTimestamp:2025-09-06 00:22:56.400219411 +0000 UTC m=+0.301779496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:22:56.423287 kubelet[1574]: I0906 00:22:56.423263 1574 kubelet_network_linux.go:50] "Initialized iptables 
rules." protocol="IPv4"
Sep 6 00:22:56.424192 kubelet[1574]: I0906 00:22:56.424175 1574 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:22:56.424353 kubelet[1574]: I0906 00:22:56.424316 1574 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 6 00:22:56.424353 kubelet[1574]: I0906 00:22:56.424358 1574 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 00:22:56.424518 kubelet[1574]: I0906 00:22:56.424367 1574 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 6 00:22:56.424518 kubelet[1574]: E0906 00:22:56.424416 1574 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:22:56.428907 kubelet[1574]: W0906 00:22:56.428872 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Sep 6 00:22:56.429032 kubelet[1574]: E0906 00:22:56.429009 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:56.429427 kubelet[1574]: I0906 00:22:56.429399 1574 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 00:22:56.429511 kubelet[1574]: I0906 00:22:56.429495 1574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 00:22:56.429603 kubelet[1574]: I0906 00:22:56.429588 1574 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:22:56.513367 kubelet[1574]: E0906 00:22:56.513301 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:56.524686 kubelet[1574]: E0906 00:22:56.524637 1574 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 6 00:22:56.613799 kubelet[1574]: E0906 00:22:56.613724 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:56.614443 kubelet[1574]: E0906 00:22:56.614077 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms"
Sep 6 00:22:56.714773 kubelet[1574]: E0906 00:22:56.714611 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:56.724769 kubelet[1574]: E0906 00:22:56.724742 1574 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 6 00:22:56.815271 kubelet[1574]: E0906 00:22:56.815223 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:56.916254 kubelet[1574]: E0906 00:22:56.916216 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:57.015118 kubelet[1574]: E0906 00:22:57.015028 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms"
Sep 6 00:22:57.017073 kubelet[1574]: E0906 00:22:57.017044 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:57.117645 kubelet[1574]: E0906 00:22:57.117606 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:57.125851 kubelet[1574]: E0906 00:22:57.125801 1574 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 6 00:22:57.218339 kubelet[1574]: E0906 00:22:57.218277 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:57.319100 kubelet[1574]: E0906 00:22:57.318998 1574 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:22:57.341586 kubelet[1574]: I0906 00:22:57.341537 1574 policy_none.go:49] "None policy: Start"
Sep 6 00:22:57.341586 kubelet[1574]: I0906 00:22:57.341583 1574 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 00:22:57.341802 kubelet[1574]: I0906 00:22:57.341605 1574 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:22:57.348333 systemd[1]: Created slice kubepods.slice.
Sep 6 00:22:57.353132 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 00:22:57.356131 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 00:22:57.362507 kubelet[1574]: I0906 00:22:57.362462 1574 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 00:22:57.362681 kubelet[1574]: I0906 00:22:57.362652 1574 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:22:57.362783 kubelet[1574]: I0906 00:22:57.362672 1574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:22:57.363429 kubelet[1574]: I0906 00:22:57.362950 1574 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:22:57.363664 kubelet[1574]: E0906 00:22:57.363642 1574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 00:22:57.363746 kubelet[1574]: E0906 00:22:57.363692 1574 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 6 00:22:57.464170 kubelet[1574]: I0906 00:22:57.464107 1574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:22:57.464410 kubelet[1574]: E0906 00:22:57.464381 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Sep 6 00:22:57.589889 kubelet[1574]: W0906 00:22:57.589750 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Sep 6 00:22:57.589889 kubelet[1574]: E0906 00:22:57.589807 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:57.608685 kubelet[1574]: W0906 00:22:57.608622 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Sep 6 00:22:57.608685 kubelet[1574]: E0906 00:22:57.608682 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:57.666755 kubelet[1574]: I0906 00:22:57.666686 1574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:22:57.667240 kubelet[1574]: E0906 00:22:57.667183 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Sep 6 00:22:57.709937 kubelet[1574]: W0906 00:22:57.709877 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Sep 6 00:22:57.709937 kubelet[1574]: E0906 00:22:57.709929 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:57.816597 kubelet[1574]: E0906 00:22:57.816526 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s"
Sep 6 00:22:57.849414 kubelet[1574]: W0906 00:22:57.849304 1574 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
Sep 6 00:22:57.849414 kubelet[1574]: E0906 00:22:57.849365 1574 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:57.934238 systemd[1]: Created slice kubepods-burstable-pod780c9da0af72b09e1a4c9c9ae3d68f89.slice.
Sep 6 00:22:57.943508 kubelet[1574]: E0906 00:22:57.943458 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:57.945616 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 6 00:22:57.947379 kubelet[1574]: E0906 00:22:57.947356 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:57.949531 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 6 00:22:57.950908 kubelet[1574]: E0906 00:22:57.950885 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:58.022426 kubelet[1574]: I0906 00:22:58.022365 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:22:58.022426 kubelet[1574]: I0906 00:22:58.022415 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:22:58.022656 kubelet[1574]: I0906 00:22:58.022444 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:22:58.022656 kubelet[1574]: I0906 00:22:58.022469 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:22:58.022656 kubelet[1574]: I0906 00:22:58.022505 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:22:58.022656 kubelet[1574]: I0906 00:22:58.022522 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:22:58.022656 kubelet[1574]: I0906 00:22:58.022538 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:22:58.022839 kubelet[1574]: I0906 00:22:58.022559 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:22:58.022839 kubelet[1574]: I0906 00:22:58.022577 1574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 6 00:22:58.068834 kubelet[1574]: I0906 00:22:58.068786 1574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:22:58.069195 kubelet[1574]: E0906 00:22:58.069168 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Sep 6 00:22:58.244617 kubelet[1574]: E0906 00:22:58.244483 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:58.245374 env[1204]: time="2025-09-06T00:22:58.245313787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:780c9da0af72b09e1a4c9c9ae3d68f89,Namespace:kube-system,Attempt:0,}"
Sep 6 00:22:58.247974 kubelet[1574]: E0906 00:22:58.247947 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:58.248300 env[1204]: time="2025-09-06T00:22:58.248271096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 6 00:22:58.251515 kubelet[1574]: E0906 00:22:58.251493 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:58.251778 env[1204]: time="2025-09-06T00:22:58.251754850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 6 00:22:58.565079 kubelet[1574]: E0906 00:22:58.565044 1574 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
Sep 6 00:22:58.871407 kubelet[1574]: I0906 00:22:58.871311 1574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:22:58.871753 kubelet[1574]: E0906 00:22:58.871713 1574 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
Sep 6 00:22:59.016371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014217106.mount: Deactivated successfully.
Sep 6 00:22:59.022253 env[1204]: time="2025-09-06T00:22:59.022195144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.025827 env[1204]: time="2025-09-06T00:22:59.025787100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.026859 env[1204]: time="2025-09-06T00:22:59.026826958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.027630 env[1204]: time="2025-09-06T00:22:59.027602775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.030464 env[1204]: time="2025-09-06T00:22:59.030416409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.031865 env[1204]: time="2025-09-06T00:22:59.031838910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.036378 env[1204]: time="2025-09-06T00:22:59.036333765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.040849 env[1204]: time="2025-09-06T00:22:59.040781289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.042154 env[1204]: time="2025-09-06T00:22:59.042118059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.045127 env[1204]: time="2025-09-06T00:22:59.045096033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.047796 env[1204]: time="2025-09-06T00:22:59.047745916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.053503 env[1204]: time="2025-09-06T00:22:59.053443917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086783296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086822710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086832689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086964619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e52e48b30a86774e52df9e1eee4d567e212b3d6899f5fed70694ce7f00a9bcd pid=1630 runtime=io.containerd.runc.v2
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086714645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086787314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:22:59.087003 env[1204]: time="2025-09-06T00:22:59.086800408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:22:59.087344 env[1204]: time="2025-09-06T00:22:59.087108601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:22:59.087344 env[1204]: time="2025-09-06T00:22:59.087191087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:22:59.087344 env[1204]: time="2025-09-06T00:22:59.087210765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:22:59.087463 env[1204]: time="2025-09-06T00:22:59.087416914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/badd1811d6fc882f0b162b08ba8efb6a59fca6f966b9638131b4f8832fb739d7 pid=1627 runtime=io.containerd.runc.v2
Sep 6 00:22:59.087776 env[1204]: time="2025-09-06T00:22:59.087668710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0697711fcd3db9a4b34fc5c52225fa6be1a37d6250f63b12af0f2143c1bb0d6f pid=1634 runtime=io.containerd.runc.v2
Sep 6 00:22:59.101236 systemd[1]: Started cri-containerd-0697711fcd3db9a4b34fc5c52225fa6be1a37d6250f63b12af0f2143c1bb0d6f.scope.
Sep 6 00:22:59.106599 systemd[1]: Started cri-containerd-badd1811d6fc882f0b162b08ba8efb6a59fca6f966b9638131b4f8832fb739d7.scope.
Sep 6 00:22:59.110337 systemd[1]: Started cri-containerd-6e52e48b30a86774e52df9e1eee4d567e212b3d6899f5fed70694ce7f00a9bcd.scope.
Sep 6 00:22:59.144640 env[1204]: time="2025-09-06T00:22:59.144477493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:780c9da0af72b09e1a4c9c9ae3d68f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"0697711fcd3db9a4b34fc5c52225fa6be1a37d6250f63b12af0f2143c1bb0d6f\""
Sep 6 00:22:59.146458 kubelet[1574]: E0906 00:22:59.146421 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:59.150110 env[1204]: time="2025-09-06T00:22:59.150050898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e52e48b30a86774e52df9e1eee4d567e212b3d6899f5fed70694ce7f00a9bcd\""
Sep 6 00:22:59.150331 env[1204]: time="2025-09-06T00:22:59.150285542Z" level=info msg="CreateContainer within sandbox \"0697711fcd3db9a4b34fc5c52225fa6be1a37d6250f63b12af0f2143c1bb0d6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 6 00:22:59.150606 kubelet[1574]: E0906 00:22:59.150571 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:59.151847 env[1204]: time="2025-09-06T00:22:59.151795329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"badd1811d6fc882f0b162b08ba8efb6a59fca6f966b9638131b4f8832fb739d7\""
Sep 6 00:22:59.152466 kubelet[1574]: E0906 00:22:59.152446 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:59.153177 env[1204]: time="2025-09-06T00:22:59.153126778Z" level=info msg="CreateContainer within sandbox \"6e52e48b30a86774e52df9e1eee4d567e212b3d6899f5fed70694ce7f00a9bcd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 6 00:22:59.154319 env[1204]: time="2025-09-06T00:22:59.154268878Z" level=info msg="CreateContainer within sandbox \"badd1811d6fc882f0b162b08ba8efb6a59fca6f966b9638131b4f8832fb739d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 6 00:22:59.181204 env[1204]: time="2025-09-06T00:22:59.181126764Z" level=info msg="CreateContainer within sandbox \"0697711fcd3db9a4b34fc5c52225fa6be1a37d6250f63b12af0f2143c1bb0d6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6927ec2288982ed7a270824ae4b22fb4e27168585365ef455627021b93d7ac4a\""
Sep 6 00:22:59.181785 env[1204]: time="2025-09-06T00:22:59.181749412Z" level=info msg="StartContainer for \"6927ec2288982ed7a270824ae4b22fb4e27168585365ef455627021b93d7ac4a\""
Sep 6 00:22:59.185761 env[1204]: time="2025-09-06T00:22:59.185697572Z" level=info msg="CreateContainer within sandbox \"badd1811d6fc882f0b162b08ba8efb6a59fca6f966b9638131b4f8832fb739d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"374b8782dd077f0d29cb8b5d8752174a1d8b67a2a5865fdd563f37f2b8858b96\""
Sep 6 00:22:59.186280 env[1204]: time="2025-09-06T00:22:59.186257552Z" level=info msg="StartContainer for \"374b8782dd077f0d29cb8b5d8752174a1d8b67a2a5865fdd563f37f2b8858b96\""
Sep 6 00:22:59.187244 env[1204]: time="2025-09-06T00:22:59.187184335Z" level=info msg="CreateContainer within sandbox \"6e52e48b30a86774e52df9e1eee4d567e212b3d6899f5fed70694ce7f00a9bcd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c55a08b5900abb68b8088c4a07056b746d2e4a3256f3228c8f4e8bc38ae94f8\""
Sep 6 00:22:59.187779 env[1204]: time="2025-09-06T00:22:59.187752029Z" level=info msg="StartContainer for \"9c55a08b5900abb68b8088c4a07056b746d2e4a3256f3228c8f4e8bc38ae94f8\""
Sep 6 00:22:59.194758 systemd[1]: Started cri-containerd-6927ec2288982ed7a270824ae4b22fb4e27168585365ef455627021b93d7ac4a.scope.
Sep 6 00:22:59.203295 systemd[1]: Started cri-containerd-374b8782dd077f0d29cb8b5d8752174a1d8b67a2a5865fdd563f37f2b8858b96.scope.
Sep 6 00:22:59.209308 systemd[1]: Started cri-containerd-9c55a08b5900abb68b8088c4a07056b746d2e4a3256f3228c8f4e8bc38ae94f8.scope.
Sep 6 00:22:59.249539 env[1204]: time="2025-09-06T00:22:59.249482543Z" level=info msg="StartContainer for \"374b8782dd077f0d29cb8b5d8752174a1d8b67a2a5865fdd563f37f2b8858b96\" returns successfully"
Sep 6 00:22:59.253942 env[1204]: time="2025-09-06T00:22:59.253892207Z" level=info msg="StartContainer for \"9c55a08b5900abb68b8088c4a07056b746d2e4a3256f3228c8f4e8bc38ae94f8\" returns successfully"
Sep 6 00:22:59.256561 env[1204]: time="2025-09-06T00:22:59.256519347Z" level=info msg="StartContainer for \"6927ec2288982ed7a270824ae4b22fb4e27168585365ef455627021b93d7ac4a\" returns successfully"
Sep 6 00:22:59.435334 kubelet[1574]: E0906 00:22:59.434378 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:59.436091 kubelet[1574]: E0906 00:22:59.436070 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:59.438798 kubelet[1574]: E0906 00:22:59.438774 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:59.439106 kubelet[1574]: E0906 00:22:59.439087 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:59.441047 kubelet[1574]: E0906 00:22:59.441025 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:22:59.441340 kubelet[1574]: E0906 00:22:59.441321 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:00.443238 kubelet[1574]: E0906 00:23:00.443206 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:23:00.446949 kubelet[1574]: E0906 00:23:00.446929 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:00.447120 kubelet[1574]: E0906 00:23:00.443740 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:23:00.447281 kubelet[1574]: E0906 00:23:00.447267 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:00.448117 kubelet[1574]: E0906 00:23:00.448078 1574 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 6 00:23:00.448197 kubelet[1574]: E0906 00:23:00.448176 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:00.473840 kubelet[1574]: I0906 00:23:00.473803 1574 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:23:00.716876 kubelet[1574]: E0906 00:23:00.714380 1574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 6 00:23:00.844858 kubelet[1574]: E0906 00:23:00.844718 1574 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186289abfcb79d13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:22:56.400219411 +0000 UTC m=+0.301779496,LastTimestamp:2025-09-06 00:22:56.400219411 +0000 UTC m=+0.301779496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 6 00:23:00.894260 kubelet[1574]: I0906 00:23:00.894192 1574 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 6 00:23:00.913957 kubelet[1574]: I0906 00:23:00.913908 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:23:00.976865 kubelet[1574]: E0906 00:23:00.976573 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:23:00.976865 kubelet[1574]: I0906 00:23:00.976608 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:23:00.978676 kubelet[1574]: E0906 00:23:00.978506 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:23:00.978676 kubelet[1574]: I0906 00:23:00.978532 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 6 00:23:00.979658 kubelet[1574]: E0906 00:23:00.979617 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 6 00:23:01.394954 kubelet[1574]: I0906 00:23:01.394877 1574 apiserver.go:52] "Watching apiserver"
Sep 6 00:23:01.413219 kubelet[1574]: I0906 00:23:01.413157 1574 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 00:23:01.443822 kubelet[1574]: I0906 00:23:01.443787 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 6 00:23:01.445806 kubelet[1574]: E0906 00:23:01.445768 1574 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 6 00:23:01.446012 kubelet[1574]: E0906 00:23:01.445897 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:02.270325 kubelet[1574]: I0906 00:23:02.270280 1574 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:23:02.314215 kubelet[1574]: E0906 00:23:02.314188 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:02.445349 kubelet[1574]: E0906 00:23:02.445294 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:03.394615 systemd[1]: Reloading.
Sep 6 00:23:03.512195 /usr/lib/systemd/system-generators/torcx-generator[1871]: time="2025-09-06T00:23:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:23:03.512223 /usr/lib/systemd/system-generators/torcx-generator[1871]: time="2025-09-06T00:23:03Z" level=info msg="torcx already run"
Sep 6 00:23:03.578768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:23:03.578784 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:23:03.595483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:23:03.695451 kubelet[1574]: I0906 00:23:03.695348 1574 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:23:03.695545 systemd[1]: Stopping kubelet.service...
Sep 6 00:23:03.720281 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:23:03.720502 systemd[1]: Stopped kubelet.service.
Sep 6 00:23:03.722206 systemd[1]: Starting kubelet.service...
Sep 6 00:23:03.820374 systemd[1]: Started kubelet.service.
Sep 6 00:23:03.865423 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:23:03.865423 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:23:03.865423 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:23:03.865849 kubelet[1916]: I0906 00:23:03.865458 1916 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:23:03.870933 kubelet[1916]: I0906 00:23:03.870903 1916 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 6 00:23:03.870933 kubelet[1916]: I0906 00:23:03.870923 1916 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:23:03.871140 kubelet[1916]: I0906 00:23:03.871117 1916 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 6 00:23:03.872089 kubelet[1916]: I0906 00:23:03.872060 1916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 6 00:23:03.877193 kubelet[1916]: I0906 00:23:03.877155 1916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:23:03.880616 kubelet[1916]: E0906 00:23:03.880575 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:23:03.880616 kubelet[1916]: I0906 00:23:03.880606 1916 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:23:03.884935 kubelet[1916]: I0906 00:23:03.884898 1916 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:23:03.885179 kubelet[1916]: I0906 00:23:03.885131 1916 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:23:03.885337 kubelet[1916]: I0906 00:23:03.885170 1916 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:23:03.885337 kubelet[1916]: I0906 00:23:03.885335 1916 topology_manager.go:138] "Creating topology manager with none policy" Sep
6 00:23:03.885454 kubelet[1916]: I0906 00:23:03.885344 1916 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 00:23:03.885454 kubelet[1916]: I0906 00:23:03.885385 1916 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:03.885501 kubelet[1916]: I0906 00:23:03.885494 1916 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:23:03.885526 kubelet[1916]: I0906 00:23:03.885510 1916 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:23:03.885569 kubelet[1916]: I0906 00:23:03.885528 1916 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:23:03.885569 kubelet[1916]: I0906 00:23:03.885537 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:23:03.886394 kubelet[1916]: I0906 00:23:03.886374 1916 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:23:03.886696 kubelet[1916]: I0906 00:23:03.886669 1916 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:23:03.887104 kubelet[1916]: I0906 00:23:03.887087 1916 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:23:03.887104 kubelet[1916]: I0906 00:23:03.887113 1916 server.go:1287] "Started kubelet" Sep 6 00:23:03.888634 kubelet[1916]: I0906 00:23:03.888617 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:23:03.893563 kubelet[1916]: I0906 00:23:03.893387 1916 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:23:03.893897 kubelet[1916]: I0906 00:23:03.893866 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:23:03.895833 kubelet[1916]: E0906 00:23:03.895348 1916 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:23:03.895833 kubelet[1916]: I0906 00:23:03.895721 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:23:03.896036 kubelet[1916]: I0906 00:23:03.896017 1916 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:23:03.899344 kubelet[1916]: I0906 00:23:03.899318 1916 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:23:03.899567 kubelet[1916]: I0906 00:23:03.899537 1916 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:23:03.899661 kubelet[1916]: E0906 00:23:03.899633 1916 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:23:03.900518 kubelet[1916]: I0906 00:23:03.900179 1916 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:23:03.900518 kubelet[1916]: I0906 00:23:03.900446 1916 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:23:03.904663 kubelet[1916]: I0906 00:23:03.904632 1916 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:23:03.904836 kubelet[1916]: I0906 00:23:03.904768 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:23:03.905867 kubelet[1916]: I0906 00:23:03.905832 1916 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:23:03.912266 kubelet[1916]: I0906 00:23:03.912224 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:23:03.913387 kubelet[1916]: I0906 00:23:03.913364 1916 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:23:03.913451 kubelet[1916]: I0906 00:23:03.913411 1916 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:23:03.913451 kubelet[1916]: I0906 00:23:03.913436 1916 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:23:03.913451 kubelet[1916]: I0906 00:23:03.913442 1916 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:23:03.913552 kubelet[1916]: E0906 00:23:03.913488 1916 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:23:03.933634 kubelet[1916]: I0906 00:23:03.933610 1916 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:23:03.933634 kubelet[1916]: I0906 00:23:03.933625 1916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:23:03.933758 kubelet[1916]: I0906 00:23:03.933642 1916 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:23:03.933807 kubelet[1916]: I0906 00:23:03.933791 1916 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:23:03.933838 kubelet[1916]: I0906 00:23:03.933804 1916 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:23:03.933838 kubelet[1916]: I0906 00:23:03.933822 1916 policy_none.go:49] "None policy: Start" Sep 6 00:23:03.933838 kubelet[1916]: I0906 00:23:03.933831 1916 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:23:03.933838 kubelet[1916]: I0906 00:23:03.933841 1916 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:23:03.933934 kubelet[1916]: I0906 00:23:03.933925 1916 state_mem.go:75] "Updated machine memory state" Sep 6 00:23:03.937413 kubelet[1916]: I0906 00:23:03.937393 1916 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:23:03.937842 kubelet[1916]: I0906 00:23:03.937803 
1916 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:23:03.937925 kubelet[1916]: I0906 00:23:03.937820 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:23:03.938206 kubelet[1916]: I0906 00:23:03.938026 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:23:03.938881 kubelet[1916]: E0906 00:23:03.938840 1916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:23:04.014596 kubelet[1916]: I0906 00:23:04.014482 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:23:04.014596 kubelet[1916]: I0906 00:23:04.014496 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.014596 kubelet[1916]: I0906 00:23:04.014504 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:23:04.041867 kubelet[1916]: I0906 00:23:04.041849 1916 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:23:04.124426 kubelet[1916]: E0906 00:23:04.124389 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:23:04.126588 kubelet[1916]: I0906 00:23:04.126567 1916 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 6 00:23:04.126666 kubelet[1916]: I0906 00:23:04.126628 1916 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:23:04.201594 kubelet[1916]: I0906 00:23:04.201529 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.201594 kubelet[1916]: I0906 00:23:04.201583 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.201785 kubelet[1916]: I0906 00:23:04.201610 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.201785 kubelet[1916]: I0906 00:23:04.201628 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:23:04.201785 kubelet[1916]: I0906 00:23:04.201711 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:23:04.201785 kubelet[1916]: I0906 00:23:04.201754 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.201785 kubelet[1916]: I0906 00:23:04.201777 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.202034 kubelet[1916]: I0906 00:23:04.201808 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:23:04.202034 kubelet[1916]: I0906 00:23:04.201828 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/780c9da0af72b09e1a4c9c9ae3d68f89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"780c9da0af72b09e1a4c9c9ae3d68f89\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:23:04.425477 kubelet[1916]: E0906 00:23:04.425423 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:04.425831 kubelet[1916]: E0906 00:23:04.425420 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:04.425963 kubelet[1916]: E0906 00:23:04.425529 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:04.573856 sudo[1952]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:23:04.574085 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:23:04.886941 kubelet[1916]: I0906 00:23:04.886895 1916 apiserver.go:52] "Watching apiserver" Sep 6 00:23:04.901249 kubelet[1916]: I0906 00:23:04.901212 1916 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:23:04.923478 kubelet[1916]: I0906 00:23:04.923454 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:04.923705 kubelet[1916]: E0906 00:23:04.923455 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:04.924103 kubelet[1916]: I0906 00:23:04.924089 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:23:05.065304 sudo[1952]: pam_unix(sudo:session): session closed for user root Sep 6 00:23:05.173452 kubelet[1916]: E0906 00:23:05.173229 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:23:05.173691 kubelet[1916]: E0906 00:23:05.173546 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:05.175082 kubelet[1916]: E0906 00:23:05.175044 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:23:05.176946 kubelet[1916]: I0906 00:23:05.175156 1916 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.174999713 podStartE2EDuration="1.174999713s" podCreationTimestamp="2025-09-06 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:05.172940687 +0000 UTC m=+1.347150481" watchObservedRunningTime="2025-09-06 00:23:05.174999713 +0000 UTC m=+1.349209507" Sep 6 00:23:05.177452 kubelet[1916]: E0906 00:23:05.177176 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:05.188396 kubelet[1916]: I0906 00:23:05.188288 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.188267765 podStartE2EDuration="1.188267765s" podCreationTimestamp="2025-09-06 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:05.188173837 +0000 UTC m=+1.362383631" watchObservedRunningTime="2025-09-06 00:23:05.188267765 +0000 UTC m=+1.362477559" Sep 6 00:23:05.195672 kubelet[1916]: I0906 00:23:05.195574 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.195554086 podStartE2EDuration="3.195554086s" podCreationTimestamp="2025-09-06 00:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:05.19532319 +0000 UTC m=+1.369532994" watchObservedRunningTime="2025-09-06 00:23:05.195554086 +0000 UTC m=+1.369763880" Sep 6 00:23:05.924615 kubelet[1916]: E0906 00:23:05.924572 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:05.924983 kubelet[1916]: E0906 00:23:05.924636 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:05.924983 kubelet[1916]: E0906 00:23:05.924898 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:06.926284 kubelet[1916]: E0906 00:23:06.926248 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:07.119157 sudo[1314]: pam_unix(sudo:session): session closed for user root Sep 6 00:23:07.120603 sshd[1311]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:07.122907 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:51224.service: Deactivated successfully. Sep 6 00:23:07.123769 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:23:07.123941 systemd[1]: session-7.scope: Consumed 4.963s CPU time. Sep 6 00:23:07.124361 systemd-logind[1188]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:23:07.125100 systemd-logind[1188]: Removed session 7. Sep 6 00:23:07.950587 kubelet[1916]: I0906 00:23:07.950540 1916 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:23:07.951126 kubelet[1916]: I0906 00:23:07.951073 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:23:07.951168 env[1204]: time="2025-09-06T00:23:07.950867483Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:23:08.721265 systemd[1]: Created slice kubepods-besteffort-pod71c7fb54_c321_424e_9e8c_620356953bc4.slice. Sep 6 00:23:08.733128 kubelet[1916]: I0906 00:23:08.733084 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-bpf-maps\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733128 kubelet[1916]: I0906 00:23:08.733120 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cni-path\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733343 kubelet[1916]: I0906 00:23:08.733140 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-xtables-lock\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733343 kubelet[1916]: I0906 00:23:08.733157 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db610cc0-3792-43d7-9fce-f829377497ca-clustermesh-secrets\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733343 kubelet[1916]: I0906 00:23:08.733172 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71c7fb54-c321-424e-9e8c-620356953bc4-kube-proxy\") pod \"kube-proxy-mdxpz\" (UID: \"71c7fb54-c321-424e-9e8c-620356953bc4\") " pod="kube-system/kube-proxy-mdxpz" Sep 6 00:23:08.733343 
kubelet[1916]: I0906 00:23:08.733189 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71c7fb54-c321-424e-9e8c-620356953bc4-xtables-lock\") pod \"kube-proxy-mdxpz\" (UID: \"71c7fb54-c321-424e-9e8c-620356953bc4\") " pod="kube-system/kube-proxy-mdxpz" Sep 6 00:23:08.733343 kubelet[1916]: I0906 00:23:08.733204 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrcl\" (UniqueName: \"kubernetes.io/projected/71c7fb54-c321-424e-9e8c-620356953bc4-kube-api-access-pnrcl\") pod \"kube-proxy-mdxpz\" (UID: \"71c7fb54-c321-424e-9e8c-620356953bc4\") " pod="kube-system/kube-proxy-mdxpz" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733218 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-cgroup\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733363 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c7fb54-c321-424e-9e8c-620356953bc4-lib-modules\") pod \"kube-proxy-mdxpz\" (UID: \"71c7fb54-c321-424e-9e8c-620356953bc4\") " pod="kube-system/kube-proxy-mdxpz" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733381 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-kernel\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733395 1916 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-hubble-tls\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733420 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-lib-modules\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733466 kubelet[1916]: I0906 00:23:08.733437 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db610cc0-3792-43d7-9fce-f829377497ca-cilium-config-path\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733612 kubelet[1916]: I0906 00:23:08.733451 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-net\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733612 kubelet[1916]: I0906 00:23:08.733465 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gr5\" (UniqueName: \"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-kube-api-access-47gr5\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733612 kubelet[1916]: I0906 00:23:08.733480 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-etc-cni-netd\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733612 kubelet[1916]: I0906 00:23:08.733494 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-hostproc\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.733612 kubelet[1916]: I0906 00:23:08.733517 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-run\") pod \"cilium-jltsm\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " pod="kube-system/cilium-jltsm" Sep 6 00:23:08.735169 systemd[1]: Created slice kubepods-burstable-poddb610cc0_3792_43d7_9fce_f829377497ca.slice. Sep 6 00:23:08.835149 kubelet[1916]: I0906 00:23:08.835111 1916 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:23:09.029425 systemd[1]: Created slice kubepods-besteffort-pod96a05f9b_ea4d_4afc_ae6b_20fca30dad74.slice. 
Sep 6 00:23:09.031369 kubelet[1916]: E0906 00:23:09.031344 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.032480 env[1204]: time="2025-09-06T00:23:09.032341373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdxpz,Uid:71c7fb54-c321-424e-9e8c-620356953bc4,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:09.036125 kubelet[1916]: I0906 00:23:09.036095 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5xz\" (UniqueName: \"kubernetes.io/projected/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-kube-api-access-8f5xz\") pod \"cilium-operator-6c4d7847fc-l4qrp\" (UID: \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\") " pod="kube-system/cilium-operator-6c4d7847fc-l4qrp" Sep 6 00:23:09.036309 kubelet[1916]: I0906 00:23:09.036291 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l4qrp\" (UID: \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\") " pod="kube-system/cilium-operator-6c4d7847fc-l4qrp" Sep 6 00:23:09.037402 kubelet[1916]: E0906 00:23:09.037377 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.038063 env[1204]: time="2025-09-06T00:23:09.038018976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jltsm,Uid:db610cc0-3792-43d7-9fce-f829377497ca,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:09.054642 update_engine[1191]: I0906 00:23:09.054539 1191 update_attempter.cc:509] Updating boot flags... 
Sep 6 00:23:09.246968 env[1204]: time="2025-09-06T00:23:09.246877940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:09.249595 env[1204]: time="2025-09-06T00:23:09.246991614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:09.249595 env[1204]: time="2025-09-06T00:23:09.247348688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:09.249595 env[1204]: time="2025-09-06T00:23:09.247499321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17 pid=2024 runtime=io.containerd.runc.v2 Sep 6 00:23:09.265141 env[1204]: time="2025-09-06T00:23:09.264428915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:09.265141 env[1204]: time="2025-09-06T00:23:09.264476905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:09.265141 env[1204]: time="2025-09-06T00:23:09.264488658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:09.265141 env[1204]: time="2025-09-06T00:23:09.264649441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1571501e822b2972d48f53c2581003480590887ca2c13d3ec7cdc114e09d0578 pid=2042 runtime=io.containerd.runc.v2 Sep 6 00:23:09.266885 systemd[1]: Started cri-containerd-8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17.scope. 
Sep 6 00:23:09.285024 systemd[1]: Started cri-containerd-1571501e822b2972d48f53c2581003480590887ca2c13d3ec7cdc114e09d0578.scope. Sep 6 00:23:09.306873 env[1204]: time="2025-09-06T00:23:09.305935177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jltsm,Uid:db610cc0-3792-43d7-9fce-f829377497ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\"" Sep 6 00:23:09.307053 kubelet[1916]: E0906 00:23:09.306616 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.307947 env[1204]: time="2025-09-06T00:23:09.307886766Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:23:09.314641 env[1204]: time="2025-09-06T00:23:09.313975554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdxpz,Uid:71c7fb54-c321-424e-9e8c-620356953bc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1571501e822b2972d48f53c2581003480590887ca2c13d3ec7cdc114e09d0578\"" Sep 6 00:23:09.314778 kubelet[1916]: E0906 00:23:09.314395 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.316164 env[1204]: time="2025-09-06T00:23:09.316134924Z" level=info msg="CreateContainer within sandbox \"1571501e822b2972d48f53c2581003480590887ca2c13d3ec7cdc114e09d0578\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:23:09.335310 kubelet[1916]: E0906 00:23:09.335262 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.335618 env[1204]: time="2025-09-06T00:23:09.335569561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l4qrp,Uid:96a05f9b-ea4d-4afc-ae6b-20fca30dad74,Namespace:kube-system,Attempt:0,}"
Sep 6 00:23:09.373287 env[1204]: time="2025-09-06T00:23:09.373215967Z" level=info msg="CreateContainer within sandbox \"1571501e822b2972d48f53c2581003480590887ca2c13d3ec7cdc114e09d0578\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70681d632c4df3cdc3d445eeed3b27faf10f4a3c15c9b6c6328e78978e9b32f3\"" Sep 6 00:23:09.375014 env[1204]: time="2025-09-06T00:23:09.373958165Z" level=info msg="StartContainer for \"70681d632c4df3cdc3d445eeed3b27faf10f4a3c15c9b6c6328e78978e9b32f3\"" Sep 6 00:23:09.380247 env[1204]: time="2025-09-06T00:23:09.380142344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:09.380247 env[1204]: time="2025-09-06T00:23:09.380185455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:09.380247 env[1204]: time="2025-09-06T00:23:09.380195494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:09.380499 env[1204]: time="2025-09-06T00:23:09.380351699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3 pid=2110 runtime=io.containerd.runc.v2 Sep 6 00:23:09.390089 systemd[1]: Started cri-containerd-70681d632c4df3cdc3d445eeed3b27faf10f4a3c15c9b6c6328e78978e9b32f3.scope. Sep 6 00:23:09.405988 systemd[1]: Started cri-containerd-606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3.scope.
Sep 6 00:23:09.425049 env[1204]: time="2025-09-06T00:23:09.425000805Z" level=info msg="StartContainer for \"70681d632c4df3cdc3d445eeed3b27faf10f4a3c15c9b6c6328e78978e9b32f3\" returns successfully" Sep 6 00:23:09.443813 env[1204]: time="2025-09-06T00:23:09.443752433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l4qrp,Uid:96a05f9b-ea4d-4afc-ae6b-20fca30dad74,Namespace:kube-system,Attempt:0,} returns sandbox id \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\"" Sep 6 00:23:09.444466 kubelet[1916]: E0906 00:23:09.444426 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.934344 kubelet[1916]: E0906 00:23:09.934300 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:09.943981 kubelet[1916]: I0906 00:23:09.943858 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdxpz" podStartSLOduration=1.943838814 podStartE2EDuration="1.943838814s" podCreationTimestamp="2025-09-06 00:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:09.943538678 +0000 UTC m=+6.117748472" watchObservedRunningTime="2025-09-06 00:23:09.943838814 +0000 UTC m=+6.118048608" Sep 6 00:23:12.817348 kubelet[1916]: E0906 00:23:12.817286 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:12.938970 kubelet[1916]: E0906 00:23:12.938890 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:23:13.456096 kubelet[1916]: E0906 00:23:13.456019 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:13.952166 kubelet[1916]: E0906 00:23:13.949520 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:14.948532 kubelet[1916]: E0906 00:23:14.948488 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:15.807218 kubelet[1916]: E0906 00:23:15.807144 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:15.949255 kubelet[1916]: E0906 00:23:15.949201 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:18.598918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520942208.mount: Deactivated successfully.
Sep 6 00:23:22.512758 env[1204]: time="2025-09-06T00:23:22.512672713Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.514633 env[1204]: time="2025-09-06T00:23:22.514606509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.516337 env[1204]: time="2025-09-06T00:23:22.516305345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:22.516810 env[1204]: time="2025-09-06T00:23:22.516777382Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:23:22.520085 env[1204]: time="2025-09-06T00:23:22.520062170Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:23:22.524854 env[1204]: time="2025-09-06T00:23:22.524813846Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:23:22.537226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064919156.mount: Deactivated successfully. 
Sep 6 00:23:22.538741 env[1204]: time="2025-09-06T00:23:22.538672191Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\"" Sep 6 00:23:22.539152 env[1204]: time="2025-09-06T00:23:22.539109423Z" level=info msg="StartContainer for \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\"" Sep 6 00:23:22.558253 systemd[1]: Started cri-containerd-8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624.scope. Sep 6 00:23:22.589526 systemd[1]: cri-containerd-8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624.scope: Deactivated successfully. Sep 6 00:23:22.787593 env[1204]: time="2025-09-06T00:23:22.786988658Z" level=info msg="StartContainer for \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\" returns successfully" Sep 6 00:23:22.810573 env[1204]: time="2025-09-06T00:23:22.810516016Z" level=info msg="shim disconnected" id=8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624 Sep 6 00:23:22.810573 env[1204]: time="2025-09-06T00:23:22.810574587Z" level=warning msg="cleaning up after shim disconnected" id=8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624 namespace=k8s.io Sep 6 00:23:22.810901 env[1204]: time="2025-09-06T00:23:22.810583664Z" level=info msg="cleaning up dead shim" Sep 6 00:23:22.816646 env[1204]: time="2025-09-06T00:23:22.816577026Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Sep 6 00:23:23.106011 kubelet[1916]: E0906 00:23:23.105972 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:23.108339 env[1204]: time="2025-09-06T00:23:23.108280069Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:23:23.125738 env[1204]: time="2025-09-06T00:23:23.124543964Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\"" Sep 6 00:23:23.126399 env[1204]: time="2025-09-06T00:23:23.126252947Z" level=info msg="StartContainer for \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\"" Sep 6 00:23:23.140779 systemd[1]: Started cri-containerd-8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7.scope. Sep 6 00:23:23.164035 env[1204]: time="2025-09-06T00:23:23.163991066Z" level=info msg="StartContainer for \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\" returns successfully" Sep 6 00:23:23.171443 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:23:23.171639 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:23:23.171826 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:23:23.173268 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:23:23.175890 systemd[1]: cri-containerd-8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7.scope: Deactivated successfully. Sep 6 00:23:23.183379 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:23:23.195424 env[1204]: time="2025-09-06T00:23:23.195384140Z" level=info msg="shim disconnected" id=8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7 Sep 6 00:23:23.195563 env[1204]: time="2025-09-06T00:23:23.195425529Z" level=warning msg="cleaning up after shim disconnected" id=8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7 namespace=k8s.io Sep 6 00:23:23.195563 env[1204]: time="2025-09-06T00:23:23.195434135Z" level=info msg="cleaning up dead shim" Sep 6 00:23:23.201384 env[1204]: time="2025-09-06T00:23:23.201358916Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2415 runtime=io.containerd.runc.v2\n" Sep 6 00:23:23.535814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624-rootfs.mount: Deactivated successfully. Sep 6 00:23:23.587718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727274988.mount: Deactivated successfully. Sep 6 00:23:24.107314 kubelet[1916]: E0906 00:23:24.107273 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:24.109267 env[1204]: time="2025-09-06T00:23:24.109226495Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:23:24.126128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249023321.mount: Deactivated successfully. 
Sep 6 00:23:24.129618 env[1204]: time="2025-09-06T00:23:24.129575588Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\"" Sep 6 00:23:24.130179 env[1204]: time="2025-09-06T00:23:24.130145770Z" level=info msg="StartContainer for \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\"" Sep 6 00:23:24.148867 systemd[1]: Started cri-containerd-a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588.scope. Sep 6 00:23:24.184105 systemd[1]: cri-containerd-a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588.scope: Deactivated successfully. Sep 6 00:23:24.311975 env[1204]: time="2025-09-06T00:23:24.311909436Z" level=info msg="StartContainer for \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\" returns successfully" Sep 6 00:23:24.371623 env[1204]: time="2025-09-06T00:23:24.371475190Z" level=info msg="shim disconnected" id=a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588 Sep 6 00:23:24.371623 env[1204]: time="2025-09-06T00:23:24.371539741Z" level=warning msg="cleaning up after shim disconnected" id=a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588 namespace=k8s.io Sep 6 00:23:24.371623 env[1204]: time="2025-09-06T00:23:24.371555170Z" level=info msg="cleaning up dead shim" Sep 6 00:23:24.380038 env[1204]: time="2025-09-06T00:23:24.380000470Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2473 runtime=io.containerd.runc.v2\n" Sep 6 00:23:24.394475 env[1204]: time="2025-09-06T00:23:24.394434251Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:23:24.396176 env[1204]: time="2025-09-06T00:23:24.396138605Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.397869 env[1204]: time="2025-09-06T00:23:24.397820858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:23:24.398254 env[1204]: time="2025-09-06T00:23:24.398214278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:23:24.403613 env[1204]: time="2025-09-06T00:23:24.403578875Z" level=info msg="CreateContainer within sandbox \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:23:24.414609 env[1204]: time="2025-09-06T00:23:24.414552359Z" level=info msg="CreateContainer within sandbox \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\"" Sep 6 00:23:24.415665 env[1204]: time="2025-09-06T00:23:24.415622752Z" level=info msg="StartContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\"" Sep 6 00:23:24.434799 systemd[1]: Started cri-containerd-0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24.scope.
Sep 6 00:23:24.468348 env[1204]: time="2025-09-06T00:23:24.468299432Z" level=info msg="StartContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" returns successfully" Sep 6 00:23:25.110179 kubelet[1916]: E0906 00:23:25.110141 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:25.114421 kubelet[1916]: E0906 00:23:25.114395 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:25.119829 env[1204]: time="2025-09-06T00:23:25.119361143Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:23:25.137574 env[1204]: time="2025-09-06T00:23:25.137499696Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\"" Sep 6 00:23:25.138169 env[1204]: time="2025-09-06T00:23:25.138143767Z" level=info msg="StartContainer for \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\"" Sep 6 00:23:25.162080 systemd[1]: Started cri-containerd-dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842.scope. 
Sep 6 00:23:25.167961 kubelet[1916]: I0906 00:23:25.167889 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l4qrp" podStartSLOduration=1.213873664 podStartE2EDuration="16.167870277s" podCreationTimestamp="2025-09-06 00:23:09 +0000 UTC" firstStartedPulling="2025-09-06 00:23:09.445199281 +0000 UTC m=+5.619409075" lastFinishedPulling="2025-09-06 00:23:24.399195894 +0000 UTC m=+20.573405688" observedRunningTime="2025-09-06 00:23:25.141422462 +0000 UTC m=+21.315632246" watchObservedRunningTime="2025-09-06 00:23:25.167870277 +0000 UTC m=+21.342080061" Sep 6 00:23:25.196213 systemd[1]: cri-containerd-dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842.scope: Deactivated successfully. Sep 6 00:23:25.197819 env[1204]: time="2025-09-06T00:23:25.197779482Z" level=info msg="StartContainer for \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\" returns successfully" Sep 6 00:23:25.215615 env[1204]: time="2025-09-06T00:23:25.215548550Z" level=info msg="shim disconnected" id=dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842 Sep 6 00:23:25.215615 env[1204]: time="2025-09-06T00:23:25.215604555Z" level=warning msg="cleaning up after shim disconnected" id=dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842 namespace=k8s.io Sep 6 00:23:25.215615 env[1204]: time="2025-09-06T00:23:25.215613903Z" level=info msg="cleaning up dead shim" Sep 6 00:23:25.224619 env[1204]: time="2025-09-06T00:23:25.224572937Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:23:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" Sep 6 00:23:25.537077 systemd[1]: run-containerd-runc-k8s.io-dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842-runc.6XJ4Qb.mount: Deactivated successfully. 
Sep 6 00:23:25.537183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842-rootfs.mount: Deactivated successfully. Sep 6 00:23:26.122157 kubelet[1916]: E0906 00:23:26.122080 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:26.122704 kubelet[1916]: E0906 00:23:26.122408 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:26.124689 env[1204]: time="2025-09-06T00:23:26.124624630Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:23:26.145202 env[1204]: time="2025-09-06T00:23:26.145102438Z" level=info msg="CreateContainer within sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\"" Sep 6 00:23:26.147017 env[1204]: time="2025-09-06T00:23:26.146973045Z" level=info msg="StartContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\"" Sep 6 00:23:26.171647 systemd[1]: Started cri-containerd-6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505.scope. Sep 6 00:23:26.188968 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:39706.service. 
Sep 6 00:23:26.213481 env[1204]: time="2025-09-06T00:23:26.213425775Z" level=info msg="StartContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" returns successfully" Sep 6 00:23:26.252017 sshd[2605]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:26.256410 sshd[2605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:26.268158 systemd[1]: Started session-8.scope. Sep 6 00:23:26.268524 systemd-logind[1188]: New session 8 of user core. Sep 6 00:23:26.365853 kubelet[1916]: I0906 00:23:26.364864 1916 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:23:26.400460 systemd[1]: Created slice kubepods-burstable-pod53fed850_3ae1_452c_90f0_f8bdab3032d2.slice. Sep 6 00:23:26.407534 systemd[1]: Created slice kubepods-burstable-podadf9a97d_4d25_4e85_b2e9_c8b8318d2299.slice. Sep 6 00:23:26.454408 sshd[2605]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:26.457507 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:39706.service: Deactivated successfully. Sep 6 00:23:26.458471 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:23:26.459662 systemd-logind[1188]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:23:26.460948 systemd-logind[1188]: Removed session 8. 
Sep 6 00:23:26.551842 kubelet[1916]: I0906 00:23:26.551779 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmldp\" (UniqueName: \"kubernetes.io/projected/adf9a97d-4d25-4e85-b2e9-c8b8318d2299-kube-api-access-lmldp\") pod \"coredns-668d6bf9bc-d4rf9\" (UID: \"adf9a97d-4d25-4e85-b2e9-c8b8318d2299\") " pod="kube-system/coredns-668d6bf9bc-d4rf9" Sep 6 00:23:26.552076 kubelet[1916]: I0906 00:23:26.551911 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf9a97d-4d25-4e85-b2e9-c8b8318d2299-config-volume\") pod \"coredns-668d6bf9bc-d4rf9\" (UID: \"adf9a97d-4d25-4e85-b2e9-c8b8318d2299\") " pod="kube-system/coredns-668d6bf9bc-d4rf9" Sep 6 00:23:26.552076 kubelet[1916]: I0906 00:23:26.551986 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bc9\" (UniqueName: \"kubernetes.io/projected/53fed850-3ae1-452c-90f0-f8bdab3032d2-kube-api-access-w2bc9\") pod \"coredns-668d6bf9bc-xrzhk\" (UID: \"53fed850-3ae1-452c-90f0-f8bdab3032d2\") " pod="kube-system/coredns-668d6bf9bc-xrzhk" Sep 6 00:23:26.552076 kubelet[1916]: I0906 00:23:26.552033 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53fed850-3ae1-452c-90f0-f8bdab3032d2-config-volume\") pod \"coredns-668d6bf9bc-xrzhk\" (UID: \"53fed850-3ae1-452c-90f0-f8bdab3032d2\") " pod="kube-system/coredns-668d6bf9bc-xrzhk" Sep 6 00:23:26.708003 kubelet[1916]: E0906 00:23:26.707826 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:26.708890 env[1204]: time="2025-09-06T00:23:26.708825847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrzhk,Uid:53fed850-3ae1-452c-90f0-f8bdab3032d2,Namespace:kube-system,Attempt:0,}"
Sep 6 00:23:26.712019 kubelet[1916]: E0906 00:23:26.711955 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:26.712573 env[1204]: time="2025-09-06T00:23:26.712522818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4rf9,Uid:adf9a97d-4d25-4e85-b2e9-c8b8318d2299,Namespace:kube-system,Attempt:0,}" Sep 6 00:23:27.129027 kubelet[1916]: E0906 00:23:27.128985 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:27.145086 kubelet[1916]: I0906 00:23:27.144996 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jltsm" podStartSLOduration=5.932459393 podStartE2EDuration="19.144967894s" podCreationTimestamp="2025-09-06 00:23:08 +0000 UTC" firstStartedPulling="2025-09-06 00:23:09.307445405 +0000 UTC m=+5.481655189" lastFinishedPulling="2025-09-06 00:23:22.519953896 +0000 UTC m=+18.694163690" observedRunningTime="2025-09-06 00:23:27.14408817 +0000 UTC m=+23.318297964" watchObservedRunningTime="2025-09-06 00:23:27.144967894 +0000 UTC m=+23.319177688" Sep 6 00:23:28.130754 kubelet[1916]: E0906 00:23:28.130697 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:28.256874 systemd-networkd[1025]: cilium_host: Link UP Sep 6 00:23:28.257059 systemd-networkd[1025]: cilium_net: Link UP Sep 6 00:23:28.257064 systemd-networkd[1025]: cilium_net: Gained carrier Sep 6 00:23:28.257953 systemd-networkd[1025]: cilium_host: Gained carrier Sep 6 00:23:28.259762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 00:23:28.259534 systemd-networkd[1025]: cilium_host: Gained IPv6LL Sep 6 00:23:28.342208 systemd-networkd[1025]: cilium_vxlan: Link UP Sep 6 00:23:28.342216 systemd-networkd[1025]: cilium_vxlan: Gained carrier Sep 6 00:23:28.536767 kernel: NET: Registered PF_ALG protocol family Sep 6 00:23:29.032843 systemd-networkd[1025]: cilium_net: Gained IPv6LL Sep 6 00:23:29.097081 systemd-networkd[1025]: lxc_health: Link UP Sep 6 00:23:29.104686 systemd-networkd[1025]: lxc_health: Gained carrier Sep 6 00:23:29.104862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:23:29.132492 kubelet[1916]: E0906 00:23:29.132452 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:29.287163 systemd-networkd[1025]: lxc0a68880c37d1: Link UP Sep 6 00:23:29.292873 kernel: eth0: renamed from tmp3c809 Sep 6 00:23:29.306763 kernel: eth0: renamed from tmp63b9b Sep 6 00:23:29.315196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0a68880c37d1: link becomes ready Sep 6 00:23:29.314300 systemd-networkd[1025]: lxcb6e134ff320f: Link UP Sep 6 00:23:29.315065 systemd-networkd[1025]: lxc0a68880c37d1: Gained carrier Sep 6 00:23:29.317752 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:23:29.317885 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb6e134ff320f: link becomes ready Sep 6 00:23:29.318024 systemd-networkd[1025]: lxcb6e134ff320f: Gained carrier Sep 6 00:23:29.673179 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL Sep 6 00:23:30.632935 systemd-networkd[1025]: lxc0a68880c37d1: Gained IPv6LL Sep 6 00:23:31.039521 kubelet[1916]: E0906 00:23:31.039382 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:31.080934 systemd-networkd[1025]: lxc_health: Gained IPv6LL
Sep 6 00:23:31.136317 kubelet[1916]: E0906 00:23:31.136290 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:31.336885 systemd-networkd[1025]: lxcb6e134ff320f: Gained IPv6LL Sep 6 00:23:31.458365 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:49434.service. Sep 6 00:23:31.499614 sshd[3153]: Accepted publickey for core from 10.0.0.1 port 49434 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:31.501001 sshd[3153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:31.504797 systemd-logind[1188]: New session 9 of user core. Sep 6 00:23:31.505571 systemd[1]: Started session-9.scope. Sep 6 00:23:31.636099 sshd[3153]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:31.638942 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:49434.service: Deactivated successfully. Sep 6 00:23:31.639666 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:23:31.640568 systemd-logind[1188]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:23:31.641487 systemd-logind[1188]: Removed session 9. Sep 6 00:23:32.137817 kubelet[1916]: E0906 00:23:32.137774 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:32.624442 env[1204]: time="2025-09-06T00:23:32.624335084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:32.624965 env[1204]: time="2025-09-06T00:23:32.624409093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:23:32.624965 env[1204]: time="2025-09-06T00:23:32.624419783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:32.624965 env[1204]: time="2025-09-06T00:23:32.624648001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63b9b8607d0859269477bb20bbca38adeec1bb808412a4a04aa814528fb5657c pid=3184 runtime=io.containerd.runc.v2 Sep 6 00:23:32.625363 env[1204]: time="2025-09-06T00:23:32.625309585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:23:32.625419 env[1204]: time="2025-09-06T00:23:32.625383704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:23:32.625419 env[1204]: time="2025-09-06T00:23:32.625407960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:23:32.625663 env[1204]: time="2025-09-06T00:23:32.625611552Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c809f581a35e4e7345a76f8009425efc49ba0a4a9f7e83a361bcf15bc65dc19 pid=3193 runtime=io.containerd.runc.v2 Sep 6 00:23:32.642047 systemd[1]: Started cri-containerd-63b9b8607d0859269477bb20bbca38adeec1bb808412a4a04aa814528fb5657c.scope. Sep 6 00:23:32.646599 systemd[1]: Started cri-containerd-3c809f581a35e4e7345a76f8009425efc49ba0a4a9f7e83a361bcf15bc65dc19.scope.
Sep 6 00:23:32.655668 systemd-resolved[1140]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:23:32.657486 systemd-resolved[1140]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:23:32.681518 env[1204]: time="2025-09-06T00:23:32.681436707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xrzhk,Uid:53fed850-3ae1-452c-90f0-f8bdab3032d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c809f581a35e4e7345a76f8009425efc49ba0a4a9f7e83a361bcf15bc65dc19\"" Sep 6 00:23:32.682084 kubelet[1916]: E0906 00:23:32.682051 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:32.683799 env[1204]: time="2025-09-06T00:23:32.683765263Z" level=info msg="CreateContainer within sandbox \"3c809f581a35e4e7345a76f8009425efc49ba0a4a9f7e83a361bcf15bc65dc19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:23:32.685894 env[1204]: time="2025-09-06T00:23:32.685857324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4rf9,Uid:adf9a97d-4d25-4e85-b2e9-c8b8318d2299,Namespace:kube-system,Attempt:0,} returns sandbox id \"63b9b8607d0859269477bb20bbca38adeec1bb808412a4a04aa814528fb5657c\"" Sep 6 00:23:32.686750 kubelet[1916]: E0906 00:23:32.686709 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:32.688569 env[1204]: time="2025-09-06T00:23:32.688542590Z" level=info msg="CreateContainer within sandbox \"63b9b8607d0859269477bb20bbca38adeec1bb808412a4a04aa814528fb5657c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:23:33.443003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711705675.mount: Deactivated 
successfully. Sep 6 00:23:33.487680 env[1204]: time="2025-09-06T00:23:33.487587052Z" level=info msg="CreateContainer within sandbox \"3c809f581a35e4e7345a76f8009425efc49ba0a4a9f7e83a361bcf15bc65dc19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a694ac989c9709b64ba25d345c4e1f61817d8c232ff883ad820b76c0cef54f3\"" Sep 6 00:23:33.488284 env[1204]: time="2025-09-06T00:23:33.488232965Z" level=info msg="StartContainer for \"1a694ac989c9709b64ba25d345c4e1f61817d8c232ff883ad820b76c0cef54f3\"" Sep 6 00:23:33.493203 env[1204]: time="2025-09-06T00:23:33.493166035Z" level=info msg="CreateContainer within sandbox \"63b9b8607d0859269477bb20bbca38adeec1bb808412a4a04aa814528fb5657c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d6e6a3c2af60f622129784e9d199713c8aa38674156f610ef343357b81ba6ab\"" Sep 6 00:23:33.493642 env[1204]: time="2025-09-06T00:23:33.493612123Z" level=info msg="StartContainer for \"6d6e6a3c2af60f622129784e9d199713c8aa38674156f610ef343357b81ba6ab\"" Sep 6 00:23:33.505361 systemd[1]: Started cri-containerd-1a694ac989c9709b64ba25d345c4e1f61817d8c232ff883ad820b76c0cef54f3.scope. Sep 6 00:23:33.516351 systemd[1]: Started cri-containerd-6d6e6a3c2af60f622129784e9d199713c8aa38674156f610ef343357b81ba6ab.scope. 
Sep 6 00:23:33.673990 env[1204]: time="2025-09-06T00:23:33.673932144Z" level=info msg="StartContainer for \"1a694ac989c9709b64ba25d345c4e1f61817d8c232ff883ad820b76c0cef54f3\" returns successfully" Sep 6 00:23:33.713325 env[1204]: time="2025-09-06T00:23:33.713212243Z" level=info msg="StartContainer for \"6d6e6a3c2af60f622129784e9d199713c8aa38674156f610ef343357b81ba6ab\" returns successfully" Sep 6 00:23:34.143791 kubelet[1916]: E0906 00:23:34.143759 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:34.145541 kubelet[1916]: E0906 00:23:34.145502 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:34.213921 kubelet[1916]: I0906 00:23:34.213767 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d4rf9" podStartSLOduration=25.213745601 podStartE2EDuration="25.213745601s" podCreationTimestamp="2025-09-06 00:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:34.211918619 +0000 UTC m=+30.386128413" watchObservedRunningTime="2025-09-06 00:23:34.213745601 +0000 UTC m=+30.387955396" Sep 6 00:23:34.223044 kubelet[1916]: I0906 00:23:34.222978 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xrzhk" podStartSLOduration=25.222957199 podStartE2EDuration="25.222957199s" podCreationTimestamp="2025-09-06 00:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:23:34.222442703 +0000 UTC m=+30.396652497" watchObservedRunningTime="2025-09-06 00:23:34.222957199 +0000 UTC m=+30.397166994" Sep 6 
00:23:35.147847 kubelet[1916]: E0906 00:23:35.147813 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:35.147847 kubelet[1916]: E0906 00:23:35.147832 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:36.148617 kubelet[1916]: E0906 00:23:36.148583 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:36.149123 kubelet[1916]: E0906 00:23:36.148667 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:23:36.640182 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:49440.service. Sep 6 00:23:36.676746 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 49440 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:36.677778 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:36.681103 systemd-logind[1188]: New session 10 of user core. Sep 6 00:23:36.681914 systemd[1]: Started session-10.scope. Sep 6 00:23:36.907228 sshd[3344]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:36.909654 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:49440.service: Deactivated successfully. Sep 6 00:23:36.910385 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:23:36.910972 systemd-logind[1188]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:23:36.911653 systemd-logind[1188]: Removed session 10. Sep 6 00:23:41.912270 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:55810.service. 
Sep 6 00:23:41.953946 sshd[3364]: Accepted publickey for core from 10.0.0.1 port 55810 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:41.955192 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:41.958932 systemd-logind[1188]: New session 11 of user core. Sep 6 00:23:41.959802 systemd[1]: Started session-11.scope. Sep 6 00:23:42.083973 sshd[3364]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:42.087715 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:55810.service: Deactivated successfully. Sep 6 00:23:42.088445 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:23:42.089398 systemd-logind[1188]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:23:42.090797 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:55826.service. Sep 6 00:23:42.091896 systemd-logind[1188]: Removed session 11. Sep 6 00:23:42.132809 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:42.133978 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:42.137113 systemd-logind[1188]: New session 12 of user core. Sep 6 00:23:42.137962 systemd[1]: Started session-12.scope. Sep 6 00:23:42.288229 sshd[3378]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:42.292278 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:55826.service: Deactivated successfully. Sep 6 00:23:42.292850 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:23:42.293581 systemd-logind[1188]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:23:42.295553 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:55836.service. Sep 6 00:23:42.297755 systemd-logind[1188]: Removed session 12. 
Sep 6 00:23:42.339933 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 55836 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:42.341388 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:42.346007 systemd-logind[1188]: New session 13 of user core. Sep 6 00:23:42.347354 systemd[1]: Started session-13.scope. Sep 6 00:23:42.465974 sshd[3390]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:42.468925 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:55836.service: Deactivated successfully. Sep 6 00:23:42.469914 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:23:42.470478 systemd-logind[1188]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:23:42.471288 systemd-logind[1188]: Removed session 13. Sep 6 00:23:47.471635 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:55850.service. Sep 6 00:23:47.509518 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 55850 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:47.511021 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:47.514802 systemd-logind[1188]: New session 14 of user core. Sep 6 00:23:47.515682 systemd[1]: Started session-14.scope. Sep 6 00:23:47.623357 sshd[3406]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:47.625948 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:55850.service: Deactivated successfully. Sep 6 00:23:47.626973 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:23:47.627621 systemd-logind[1188]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:23:47.628330 systemd-logind[1188]: Removed session 14. Sep 6 00:23:52.627825 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:56566.service. 
Sep 6 00:23:52.663746 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 56566 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:52.664862 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:52.668399 systemd-logind[1188]: New session 15 of user core. Sep 6 00:23:52.669419 systemd[1]: Started session-15.scope. Sep 6 00:23:52.772263 sshd[3419]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:52.774215 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:56566.service: Deactivated successfully. Sep 6 00:23:52.774876 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:23:52.775429 systemd-logind[1188]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:23:52.776100 systemd-logind[1188]: Removed session 15. Sep 6 00:23:57.777259 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:56574.service. Sep 6 00:23:57.814102 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:57.815357 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:57.818883 systemd-logind[1188]: New session 16 of user core. Sep 6 00:23:57.819659 systemd[1]: Started session-16.scope. Sep 6 00:23:57.925964 sshd[3434]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:57.929028 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:56574.service: Deactivated successfully. Sep 6 00:23:57.929563 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:23:57.930496 systemd-logind[1188]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:23:57.931706 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:56584.service. Sep 6 00:23:57.933146 systemd-logind[1188]: Removed session 16. 
Sep 6 00:23:57.968897 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 56584 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:57.970152 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:57.973876 systemd-logind[1188]: New session 17 of user core. Sep 6 00:23:57.974695 systemd[1]: Started session-17.scope. Sep 6 00:23:58.280859 sshd[3447]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:58.284439 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:56584.service: Deactivated successfully. Sep 6 00:23:58.285256 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:23:58.285890 systemd-logind[1188]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:23:58.287503 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:56592.service. Sep 6 00:23:58.289457 systemd-logind[1188]: Removed session 17. Sep 6 00:23:58.328615 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 56592 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:58.330142 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:58.334710 systemd-logind[1188]: New session 18 of user core. Sep 6 00:23:58.335761 systemd[1]: Started session-18.scope. Sep 6 00:23:58.791252 sshd[3459]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:58.794311 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:56592.service: Deactivated successfully. Sep 6 00:23:58.795336 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:23:58.797810 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:56602.service. Sep 6 00:23:58.798835 systemd-logind[1188]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:23:58.802878 systemd-logind[1188]: Removed session 18. 
Sep 6 00:23:58.841290 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 56602 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:58.842556 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:58.846215 systemd-logind[1188]: New session 19 of user core. Sep 6 00:23:58.847330 systemd[1]: Started session-19.scope. Sep 6 00:23:59.103202 sshd[3477]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:59.107258 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:56606.service. Sep 6 00:23:59.112206 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:56602.service: Deactivated successfully. Sep 6 00:23:59.113255 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:23:59.114071 systemd-logind[1188]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:23:59.115097 systemd-logind[1188]: Removed session 19. Sep 6 00:23:59.145906 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 56606 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:23:59.147189 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:23:59.151090 systemd-logind[1188]: New session 20 of user core. Sep 6 00:23:59.152028 systemd[1]: Started session-20.scope. Sep 6 00:23:59.267457 sshd[3489]: pam_unix(sshd:session): session closed for user core Sep 6 00:23:59.270441 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:56606.service: Deactivated successfully. Sep 6 00:23:59.271378 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:23:59.272016 systemd-logind[1188]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:23:59.272758 systemd-logind[1188]: Removed session 20. Sep 6 00:24:04.271906 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:44086.service. 
Sep 6 00:24:04.311026 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 44086 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:04.312380 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:04.316203 systemd-logind[1188]: New session 21 of user core. Sep 6 00:24:04.317103 systemd[1]: Started session-21.scope. Sep 6 00:24:04.422442 sshd[3506]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:04.425422 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:44086.service: Deactivated successfully. Sep 6 00:24:04.426309 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:24:04.427108 systemd-logind[1188]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:24:04.427879 systemd-logind[1188]: Removed session 21. Sep 6 00:24:09.426831 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:44092.service. Sep 6 00:24:09.463201 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 44092 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:09.464301 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:09.467449 systemd-logind[1188]: New session 22 of user core. Sep 6 00:24:09.468284 systemd[1]: Started session-22.scope. Sep 6 00:24:09.567152 sshd[3521]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:09.569404 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:44092.service: Deactivated successfully. Sep 6 00:24:09.570066 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:24:09.570770 systemd-logind[1188]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:24:09.571394 systemd-logind[1188]: Removed session 22. Sep 6 00:24:14.571484 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:37236.service. 
Sep 6 00:24:14.608164 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:14.609273 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:14.612235 systemd-logind[1188]: New session 23 of user core. Sep 6 00:24:14.613011 systemd[1]: Started session-23.scope. Sep 6 00:24:14.716903 sshd[3536]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:14.718868 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:37236.service: Deactivated successfully. Sep 6 00:24:14.719520 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:24:14.719957 systemd-logind[1188]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:24:14.720588 systemd-logind[1188]: Removed session 23. Sep 6 00:24:18.915017 kubelet[1916]: E0906 00:24:18.914964 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:19.722089 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:37246.service. Sep 6 00:24:19.760957 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 37246 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:19.762076 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:19.765423 systemd-logind[1188]: New session 24 of user core. Sep 6 00:24:19.766291 systemd[1]: Started session-24.scope. Sep 6 00:24:19.868955 sshd[3549]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:19.871904 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:37246.service: Deactivated successfully. Sep 6 00:24:19.872432 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:24:19.872973 systemd-logind[1188]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:24:19.874414 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:37258.service. 
Sep 6 00:24:19.875234 systemd-logind[1188]: Removed session 24. Sep 6 00:24:19.912920 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 37258 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:19.914168 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:19.917960 systemd-logind[1188]: New session 25 of user core. Sep 6 00:24:19.918810 systemd[1]: Started session-25.scope. Sep 6 00:24:21.242685 env[1204]: time="2025-09-06T00:24:21.242600291Z" level=info msg="StopContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" with timeout 30 (s)" Sep 6 00:24:21.243241 env[1204]: time="2025-09-06T00:24:21.243087224Z" level=info msg="Stop container \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" with signal terminated" Sep 6 00:24:21.254377 systemd[1]: run-containerd-runc-k8s.io-6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505-runc.3ANzTh.mount: Deactivated successfully. Sep 6 00:24:21.257346 systemd[1]: cri-containerd-0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24.scope: Deactivated successfully. Sep 6 00:24:21.272492 env[1204]: time="2025-09-06T00:24:21.272416884Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:24:21.274625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:21.280459 env[1204]: time="2025-09-06T00:24:21.280428142Z" level=info msg="StopContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" with timeout 2 (s)" Sep 6 00:24:21.282363 env[1204]: time="2025-09-06T00:24:21.282302252Z" level=info msg="Stop container \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" with signal terminated" Sep 6 00:24:21.284828 env[1204]: time="2025-09-06T00:24:21.284762105Z" level=info msg="shim disconnected" id=0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24 Sep 6 00:24:21.284828 env[1204]: time="2025-09-06T00:24:21.284816239Z" level=warning msg="cleaning up after shim disconnected" id=0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24 namespace=k8s.io Sep 6 00:24:21.284828 env[1204]: time="2025-09-06T00:24:21.284825096Z" level=info msg="cleaning up dead shim" Sep 6 00:24:21.290318 systemd-networkd[1025]: lxc_health: Link DOWN Sep 6 00:24:21.290326 systemd-networkd[1025]: lxc_health: Lost carrier Sep 6 00:24:21.292252 env[1204]: time="2025-09-06T00:24:21.292187581Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3614 runtime=io.containerd.runc.v2\n" Sep 6 00:24:21.295057 env[1204]: time="2025-09-06T00:24:21.295027792Z" level=info msg="StopContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" returns successfully" Sep 6 00:24:21.295852 env[1204]: time="2025-09-06T00:24:21.295814479Z" level=info msg="StopPodSandbox for \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\"" Sep 6 00:24:21.295928 env[1204]: time="2025-09-06T00:24:21.295907368Z" level=info msg="Container to stop \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.297814 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3-shm.mount: Deactivated successfully. Sep 6 00:24:21.309641 systemd[1]: cri-containerd-606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3.scope: Deactivated successfully. Sep 6 00:24:21.327522 systemd[1]: cri-containerd-6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505.scope: Deactivated successfully. Sep 6 00:24:21.327836 systemd[1]: cri-containerd-6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505.scope: Consumed 6.061s CPU time. Sep 6 00:24:21.329692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3-rootfs.mount: Deactivated successfully. Sep 6 00:24:21.339180 env[1204]: time="2025-09-06T00:24:21.339110035Z" level=info msg="shim disconnected" id=606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3 Sep 6 00:24:21.339180 env[1204]: time="2025-09-06T00:24:21.339160551Z" level=warning msg="cleaning up after shim disconnected" id=606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3 namespace=k8s.io Sep 6 00:24:21.339180 env[1204]: time="2025-09-06T00:24:21.339169860Z" level=info msg="cleaning up dead shim" Sep 6 00:24:21.346252 env[1204]: time="2025-09-06T00:24:21.346204296Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n" Sep 6 00:24:21.346805 env[1204]: time="2025-09-06T00:24:21.346779077Z" level=info msg="TearDown network for sandbox \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\" successfully" Sep 6 00:24:21.346895 env[1204]: time="2025-09-06T00:24:21.346873769Z" level=info msg="StopPodSandbox for \"606fe779124c24f911c8b923b4638bdd699d2f1b38a0cdedece5047553c829c3\" returns successfully" Sep 6 00:24:21.351311 env[1204]: time="2025-09-06T00:24:21.351265292Z" level=info 
msg="shim disconnected" id=6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505 Sep 6 00:24:21.351415 env[1204]: time="2025-09-06T00:24:21.351309467Z" level=warning msg="cleaning up after shim disconnected" id=6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505 namespace=k8s.io Sep 6 00:24:21.351415 env[1204]: time="2025-09-06T00:24:21.351410921Z" level=info msg="cleaning up dead shim" Sep 6 00:24:21.361056 kubelet[1916]: I0906 00:24:21.361023 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-cilium-config-path\") pod \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\" (UID: \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\") " Sep 6 00:24:21.361347 kubelet[1916]: I0906 00:24:21.361069 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5xz\" (UniqueName: \"kubernetes.io/projected/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-kube-api-access-8f5xz\") pod \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\" (UID: \"96a05f9b-ea4d-4afc-ae6b-20fca30dad74\") " Sep 6 00:24:21.361381 env[1204]: time="2025-09-06T00:24:21.361275048Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n" Sep 6 00:24:21.364059 kubelet[1916]: I0906 00:24:21.363997 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96a05f9b-ea4d-4afc-ae6b-20fca30dad74" (UID: "96a05f9b-ea4d-4afc-ae6b-20fca30dad74"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:24:21.364209 kubelet[1916]: I0906 00:24:21.364187 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-kube-api-access-8f5xz" (OuterVolumeSpecName: "kube-api-access-8f5xz") pod "96a05f9b-ea4d-4afc-ae6b-20fca30dad74" (UID: "96a05f9b-ea4d-4afc-ae6b-20fca30dad74"). InnerVolumeSpecName "kube-api-access-8f5xz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:24:21.364484 env[1204]: time="2025-09-06T00:24:21.364448028Z" level=info msg="StopContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" returns successfully" Sep 6 00:24:21.364969 env[1204]: time="2025-09-06T00:24:21.364915423Z" level=info msg="StopPodSandbox for \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\"" Sep 6 00:24:21.364969 env[1204]: time="2025-09-06T00:24:21.364977431Z" level=info msg="Container to stop \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.365184 env[1204]: time="2025-09-06T00:24:21.364991950Z" level=info msg="Container to stop \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.365184 env[1204]: time="2025-09-06T00:24:21.365002640Z" level=info msg="Container to stop \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.365184 env[1204]: time="2025-09-06T00:24:21.365012329Z" level=info msg="Container to stop \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.365184 env[1204]: time="2025-09-06T00:24:21.365021837Z" level=info msg="Container to stop 
\"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:21.370032 systemd[1]: cri-containerd-8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17.scope: Deactivated successfully. Sep 6 00:24:21.389835 env[1204]: time="2025-09-06T00:24:21.389784608Z" level=info msg="shim disconnected" id=8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17 Sep 6 00:24:21.390020 env[1204]: time="2025-09-06T00:24:21.389836217Z" level=warning msg="cleaning up after shim disconnected" id=8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17 namespace=k8s.io Sep 6 00:24:21.390020 env[1204]: time="2025-09-06T00:24:21.389845764Z" level=info msg="cleaning up dead shim" Sep 6 00:24:21.395941 env[1204]: time="2025-09-06T00:24:21.395897889Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3705 runtime=io.containerd.runc.v2\n" Sep 6 00:24:21.396342 env[1204]: time="2025-09-06T00:24:21.396312353Z" level=info msg="TearDown network for sandbox \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" successfully" Sep 6 00:24:21.396342 env[1204]: time="2025-09-06T00:24:21.396337772Z" level=info msg="StopPodSandbox for \"8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17\" returns successfully" Sep 6 00:24:21.461311 kubelet[1916]: I0906 00:24:21.461274 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8f5xz\" (UniqueName: \"kubernetes.io/projected/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-kube-api-access-8f5xz\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.461311 kubelet[1916]: I0906 00:24:21.461300 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96a05f9b-ea4d-4afc-ae6b-20fca30dad74-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 
00:24:21.561743 kubelet[1916]: I0906 00:24:21.561678 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.561743 kubelet[1916]: I0906 00:24:21.561677 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-kernel\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561839 kubelet[1916]: I0906 00:24:21.561795 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-lib-modules\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561839 kubelet[1916]: I0906 00:24:21.561802 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.561839 kubelet[1916]: I0906 00:24:21.561830 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db610cc0-3792-43d7-9fce-f829377497ca-clustermesh-secrets\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561919 kubelet[1916]: I0906 00:24:21.561850 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db610cc0-3792-43d7-9fce-f829377497ca-cilium-config-path\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561919 kubelet[1916]: I0906 00:24:21.561871 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cni-path\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561919 kubelet[1916]: I0906 00:24:21.561887 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-run\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561919 kubelet[1916]: I0906 00:24:21.561900 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-net\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.561919 kubelet[1916]: I0906 00:24:21.561916 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47gr5\" (UniqueName: 
\"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-kube-api-access-47gr5\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561930 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-hostproc\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561948 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-cgroup\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561944 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561963 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-hubble-tls\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561979 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-etc-cni-netd\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562039 kubelet[1916]: I0906 00:24:21.561991 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-bpf-maps\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562179 kubelet[1916]: I0906 00:24:21.562008 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-xtables-lock\") pod \"db610cc0-3792-43d7-9fce-f829377497ca\" (UID: \"db610cc0-3792-43d7-9fce-f829377497ca\") " Sep 6 00:24:21.562179 kubelet[1916]: I0906 00:24:21.562040 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.562179 kubelet[1916]: I0906 00:24:21.562051 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.562179 
kubelet[1916]: I0906 00:24:21.562060 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.562179 kubelet[1916]: I0906 00:24:21.562079 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.562179 kubelet[1916]: I0906 00:24:21.562097 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.562341 kubelet[1916]: I0906 00:24:21.562110 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.564330 kubelet[1916]: I0906 00:24:21.562558 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.564330 kubelet[1916]: I0906 00:24:21.562589 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.564330 kubelet[1916]: I0906 00:24:21.562696 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.564330 kubelet[1916]: I0906 00:24:21.562757 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:21.564330 kubelet[1916]: I0906 00:24:21.564306 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db610cc0-3792-43d7-9fce-f829377497ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:24:21.564763 kubelet[1916]: I0906 00:24:21.564713 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db610cc0-3792-43d7-9fce-f829377497ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:24:21.565111 kubelet[1916]: I0906 00:24:21.565078 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:24:21.565413 kubelet[1916]: I0906 00:24:21.565390 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-kube-api-access-47gr5" (OuterVolumeSpecName: "kube-api-access-47gr5") pod "db610cc0-3792-43d7-9fce-f829377497ca" (UID: "db610cc0-3792-43d7-9fce-f829377497ca"). InnerVolumeSpecName "kube-api-access-47gr5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:24:21.662837 kubelet[1916]: I0906 00:24:21.662806 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662837 kubelet[1916]: I0906 00:24:21.662830 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662837 kubelet[1916]: I0906 00:24:21.662839 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662848 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662857 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662864 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db610cc0-3792-43d7-9fce-f829377497ca-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662874 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db610cc0-3792-43d7-9fce-f829377497ca-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662883 1916 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662891 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662898 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47gr5\" (UniqueName: \"kubernetes.io/projected/db610cc0-3792-43d7-9fce-f829377497ca-kube-api-access-47gr5\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.662978 kubelet[1916]: I0906 00:24:21.662906 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db610cc0-3792-43d7-9fce-f829377497ca-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:21.920177 systemd[1]: Removed slice kubepods-besteffort-pod96a05f9b_ea4d_4afc_ae6b_20fca30dad74.slice. Sep 6 00:24:21.921692 systemd[1]: Removed slice kubepods-burstable-poddb610cc0_3792_43d7_9fce_f829377497ca.slice. Sep 6 00:24:21.921795 systemd[1]: kubepods-burstable-poddb610cc0_3792_43d7_9fce_f829377497ca.slice: Consumed 6.162s CPU time. Sep 6 00:24:22.248467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505-rootfs.mount: Deactivated successfully. Sep 6 00:24:22.248574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:22.249194 kubelet[1916]: I0906 00:24:22.249048 1916 scope.go:117] "RemoveContainer" containerID="0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24" Sep 6 00:24:22.248635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d9298e41e2703e1e3bd1bdf3fa9024f9be16c8e6522161868f2b55edeef5b17-shm.mount: Deactivated successfully. Sep 6 00:24:22.248695 systemd[1]: var-lib-kubelet-pods-96a05f9b\x2dea4d\x2d4afc\x2dae6b\x2d20fca30dad74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8f5xz.mount: Deactivated successfully. Sep 6 00:24:22.248775 systemd[1]: var-lib-kubelet-pods-db610cc0\x2d3792\x2d43d7\x2d9fce\x2df829377497ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47gr5.mount: Deactivated successfully. Sep 6 00:24:22.248840 systemd[1]: var-lib-kubelet-pods-db610cc0\x2d3792\x2d43d7\x2d9fce\x2df829377497ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:24:22.248891 systemd[1]: var-lib-kubelet-pods-db610cc0\x2d3792\x2d43d7\x2d9fce\x2df829377497ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:24:22.250816 env[1204]: time="2025-09-06T00:24:22.250746174Z" level=info msg="RemoveContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\"" Sep 6 00:24:22.257450 env[1204]: time="2025-09-06T00:24:22.257377892Z" level=info msg="RemoveContainer for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" returns successfully" Sep 6 00:24:22.257670 kubelet[1916]: I0906 00:24:22.257645 1916 scope.go:117] "RemoveContainer" containerID="0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24" Sep 6 00:24:22.257944 env[1204]: time="2025-09-06T00:24:22.257819008Z" level=error msg="ContainerStatus for \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\": not found" Sep 6 00:24:22.258765 kubelet[1916]: E0906 00:24:22.258335 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\": not found" containerID="0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24" Sep 6 00:24:22.258877 kubelet[1916]: I0906 00:24:22.258798 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24"} err="failed to get container status \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\": rpc error: code = NotFound desc = an error occurred when try to find container \"0153b21be62867ca401563fccfedbaea46dd474a1ec70a0e19cfd3c2a65aec24\": not found" Sep 6 00:24:22.258911 kubelet[1916]: I0906 00:24:22.258881 1916 scope.go:117] "RemoveContainer" containerID="6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505" Sep 6 00:24:22.260390 env[1204]: 
time="2025-09-06T00:24:22.260347700Z" level=info msg="RemoveContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\"" Sep 6 00:24:22.263364 env[1204]: time="2025-09-06T00:24:22.263321353Z" level=info msg="RemoveContainer for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" returns successfully" Sep 6 00:24:22.263551 kubelet[1916]: I0906 00:24:22.263520 1916 scope.go:117] "RemoveContainer" containerID="dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842" Sep 6 00:24:22.264592 env[1204]: time="2025-09-06T00:24:22.264558572Z" level=info msg="RemoveContainer for \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\"" Sep 6 00:24:22.267564 env[1204]: time="2025-09-06T00:24:22.267516877Z" level=info msg="RemoveContainer for \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\" returns successfully" Sep 6 00:24:22.267770 kubelet[1916]: I0906 00:24:22.267687 1916 scope.go:117] "RemoveContainer" containerID="a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588" Sep 6 00:24:22.268655 env[1204]: time="2025-09-06T00:24:22.268622384Z" level=info msg="RemoveContainer for \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\"" Sep 6 00:24:22.271530 env[1204]: time="2025-09-06T00:24:22.271493020Z" level=info msg="RemoveContainer for \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\" returns successfully" Sep 6 00:24:22.271672 kubelet[1916]: I0906 00:24:22.271646 1916 scope.go:117] "RemoveContainer" containerID="8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7" Sep 6 00:24:22.273686 env[1204]: time="2025-09-06T00:24:22.273660751Z" level=info msg="RemoveContainer for \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\"" Sep 6 00:24:22.277537 env[1204]: time="2025-09-06T00:24:22.277504251Z" level=info msg="RemoveContainer for \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\" returns successfully" Sep 6 
00:24:22.278028 kubelet[1916]: I0906 00:24:22.277661 1916 scope.go:117] "RemoveContainer" containerID="8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624" Sep 6 00:24:22.278816 env[1204]: time="2025-09-06T00:24:22.278784002Z" level=info msg="RemoveContainer for \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\"" Sep 6 00:24:22.283623 env[1204]: time="2025-09-06T00:24:22.283563374Z" level=info msg="RemoveContainer for \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\" returns successfully" Sep 6 00:24:22.283796 kubelet[1916]: I0906 00:24:22.283775 1916 scope.go:117] "RemoveContainer" containerID="6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505" Sep 6 00:24:22.284006 env[1204]: time="2025-09-06T00:24:22.283950134Z" level=error msg="ContainerStatus for \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\": not found" Sep 6 00:24:22.284153 kubelet[1916]: E0906 00:24:22.284110 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\": not found" containerID="6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505" Sep 6 00:24:22.284258 kubelet[1916]: I0906 00:24:22.284153 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505"} err="failed to get container status \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a8feff3d4d38762dd2f091466dec7624af63128bc2d7aaff3bd7a8bdb92a505\": not found" Sep 6 00:24:22.284258 kubelet[1916]: I0906 00:24:22.284174 1916 
scope.go:117] "RemoveContainer" containerID="dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842" Sep 6 00:24:22.284402 env[1204]: time="2025-09-06T00:24:22.284342977Z" level=error msg="ContainerStatus for \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\": not found" Sep 6 00:24:22.284547 kubelet[1916]: E0906 00:24:22.284521 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\": not found" containerID="dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842" Sep 6 00:24:22.284583 kubelet[1916]: I0906 00:24:22.284556 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842"} err="failed to get container status \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\": rpc error: code = NotFound desc = an error occurred when try to find container \"dddaf3b24e777c0bb9c9799ca8eafd3074faed67e559854455fda507df07a842\": not found" Sep 6 00:24:22.284610 kubelet[1916]: I0906 00:24:22.284584 1916 scope.go:117] "RemoveContainer" containerID="a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588" Sep 6 00:24:22.284844 env[1204]: time="2025-09-06T00:24:22.284797627Z" level=error msg="ContainerStatus for \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\": not found" Sep 6 00:24:22.284984 kubelet[1916]: E0906 00:24:22.284954 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\": not found" containerID="a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588" Sep 6 00:24:22.285039 kubelet[1916]: I0906 00:24:22.284991 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588"} err="failed to get container status \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9caa043d56bc6ee2e1edac35ebb767565e7adbf2219c43369dae18d80e55588\": not found" Sep 6 00:24:22.285039 kubelet[1916]: I0906 00:24:22.285017 1916 scope.go:117] "RemoveContainer" containerID="8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7" Sep 6 00:24:22.285246 env[1204]: time="2025-09-06T00:24:22.285195879Z" level=error msg="ContainerStatus for \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\": not found" Sep 6 00:24:22.285333 kubelet[1916]: E0906 00:24:22.285315 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\": not found" containerID="8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7" Sep 6 00:24:22.285369 kubelet[1916]: I0906 00:24:22.285333 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7"} err="failed to get container status \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"8967f70c2684e2a9e2979d337515d3183bb5752249b81040ac752c999882f2d7\": not found" Sep 6 00:24:22.285369 kubelet[1916]: I0906 00:24:22.285345 1916 scope.go:117] "RemoveContainer" containerID="8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624" Sep 6 00:24:22.285527 env[1204]: time="2025-09-06T00:24:22.285473090Z" level=error msg="ContainerStatus for \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\": not found" Sep 6 00:24:22.285680 kubelet[1916]: E0906 00:24:22.285587 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\": not found" containerID="8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624" Sep 6 00:24:22.285680 kubelet[1916]: I0906 00:24:22.285603 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624"} err="failed to get container status \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e74520339b746de02d5ac9ac984eb412c80bf42421dc86820e8f2e6c5f5e624\": not found" Sep 6 00:24:23.212476 sshd[3562]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:23.215494 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:37258.service: Deactivated successfully. Sep 6 00:24:23.216060 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:24:23.216547 systemd-logind[1188]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:24:23.217766 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:33666.service. 
Sep 6 00:24:23.218527 systemd-logind[1188]: Removed session 25. Sep 6 00:24:23.254594 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 33666 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:23.255757 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:23.259958 systemd-logind[1188]: New session 26 of user core. Sep 6 00:24:23.260714 systemd[1]: Started session-26.scope. Sep 6 00:24:23.914431 sshd[3722]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:23.916028 kubelet[1916]: E0906 00:24:23.915987 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:23.917612 kubelet[1916]: I0906 00:24:23.917165 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a05f9b-ea4d-4afc-ae6b-20fca30dad74" path="/var/lib/kubelet/pods/96a05f9b-ea4d-4afc-ae6b-20fca30dad74/volumes" Sep 6 00:24:23.917612 kubelet[1916]: I0906 00:24:23.917563 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db610cc0-3792-43d7-9fce-f829377497ca" path="/var/lib/kubelet/pods/db610cc0-3792-43d7-9fce-f829377497ca/volumes" Sep 6 00:24:23.918688 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:33672.service. Sep 6 00:24:23.928196 systemd-logind[1188]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:24:23.928673 kubelet[1916]: I0906 00:24:23.928391 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="96a05f9b-ea4d-4afc-ae6b-20fca30dad74" containerName="cilium-operator" Sep 6 00:24:23.928673 kubelet[1916]: I0906 00:24:23.928423 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="db610cc0-3792-43d7-9fce-f829377497ca" containerName="cilium-agent" Sep 6 00:24:23.934154 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:33666.service: Deactivated successfully. 
Sep 6 00:24:23.934862 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:24:23.936755 systemd-logind[1188]: Removed session 26. Sep 6 00:24:23.939652 systemd[1]: Created slice kubepods-burstable-podde2ea3d6_ede0_4cd6_9074_68c77eceb37e.slice. Sep 6 00:24:23.972646 kubelet[1916]: I0906 00:24:23.972596 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-net\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972658 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-bpf-maps\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972678 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-run\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972689 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hubble-tls\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972703 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cni-path\") pod 
\"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972715 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7646c\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-kube-api-access-7646c\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.972868 kubelet[1916]: I0906 00:24:23.972752 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-cgroup\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973017 kubelet[1916]: I0906 00:24:23.972770 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-config-path\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973017 kubelet[1916]: I0906 00:24:23.972787 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hostproc\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973017 kubelet[1916]: I0906 00:24:23.972798 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-lib-modules\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973017 
kubelet[1916]: I0906 00:24:23.972810 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-clustermesh-secrets\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973017 kubelet[1916]: I0906 00:24:23.972825 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-ipsec-secrets\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973133 kubelet[1916]: I0906 00:24:23.972836 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-kernel\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973133 kubelet[1916]: I0906 00:24:23.972850 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-etc-cni-netd\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.973133 kubelet[1916]: I0906 00:24:23.972865 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-xtables-lock\") pod \"cilium-72pmp\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " pod="kube-system/cilium-72pmp" Sep 6 00:24:23.977963 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 33672 ssh2: RSA 
SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:23.979321 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:23.986515 systemd[1]: Started session-27.scope. Sep 6 00:24:23.987684 systemd-logind[1188]: New session 27 of user core. Sep 6 00:24:24.111081 kubelet[1916]: E0906 00:24:24.110224 1916 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:24:24.131805 sshd[3733]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:24.134591 systemd[1]: Started sshd@27-10.0.0.130:22-10.0.0.1:33674.service. Sep 6 00:24:24.136167 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:33672.service: Deactivated successfully. Sep 6 00:24:24.136779 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 00:24:24.140080 systemd-logind[1188]: Session 27 logged out. Waiting for processes to exit. Sep 6 00:24:24.141700 kubelet[1916]: E0906 00:24:24.141677 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:24.144519 systemd-logind[1188]: Removed session 27. Sep 6 00:24:24.145473 env[1204]: time="2025-09-06T00:24:24.145417257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72pmp,Uid:de2ea3d6-ede0-4cd6-9074-68c77eceb37e,Namespace:kube-system,Attempt:0,}" Sep 6 00:24:24.158997 env[1204]: time="2025-09-06T00:24:24.158902214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:24:24.158997 env[1204]: time="2025-09-06T00:24:24.158937541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:24:24.158997 env[1204]: time="2025-09-06T00:24:24.158947290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:24:24.159180 env[1204]: time="2025-09-06T00:24:24.159109651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11 pid=3761 runtime=io.containerd.runc.v2 Sep 6 00:24:24.169851 systemd[1]: Started cri-containerd-8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11.scope. Sep 6 00:24:24.173286 sshd[3752]: Accepted publickey for core from 10.0.0.1 port 33674 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:24:24.174921 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:24:24.180214 systemd[1]: Started session-28.scope. Sep 6 00:24:24.182133 systemd-logind[1188]: New session 28 of user core. 
Sep 6 00:24:24.197525 env[1204]: time="2025-09-06T00:24:24.195938022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72pmp,Uid:de2ea3d6-ede0-4cd6-9074-68c77eceb37e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\"" Sep 6 00:24:24.197602 kubelet[1916]: E0906 00:24:24.196563 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:24.198337 env[1204]: time="2025-09-06T00:24:24.198300031Z" level=info msg="CreateContainer within sandbox \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:24:24.210118 env[1204]: time="2025-09-06T00:24:24.210086067Z" level=info msg="CreateContainer within sandbox \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\"" Sep 6 00:24:24.210571 env[1204]: time="2025-09-06T00:24:24.210538242Z" level=info msg="StartContainer for \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\"" Sep 6 00:24:24.226149 systemd[1]: Started cri-containerd-0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb.scope. Sep 6 00:24:24.236865 systemd[1]: cri-containerd-0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb.scope: Deactivated successfully. Sep 6 00:24:24.237112 systemd[1]: Stopped cri-containerd-0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb.scope. 
Sep 6 00:24:24.259145 env[1204]: time="2025-09-06T00:24:24.259071905Z" level=info msg="shim disconnected" id=0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb Sep 6 00:24:24.259478 env[1204]: time="2025-09-06T00:24:24.259451312Z" level=warning msg="cleaning up after shim disconnected" id=0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb namespace=k8s.io Sep 6 00:24:24.259612 env[1204]: time="2025-09-06T00:24:24.259589606Z" level=info msg="cleaning up dead shim" Sep 6 00:24:24.266568 env[1204]: time="2025-09-06T00:24:24.266526847Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3826 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:24:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:24:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:24:24.267154 env[1204]: time="2025-09-06T00:24:24.267005392Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" Sep 6 00:24:24.267365 env[1204]: time="2025-09-06T00:24:24.267302341Z" level=error msg="Failed to pipe stderr of container \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\"" error="reading from a closed fifo" Sep 6 00:24:24.270894 env[1204]: time="2025-09-06T00:24:24.270820450Z" level=error msg="Failed to pipe stdout of container \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\"" error="reading from a closed fifo" Sep 6 00:24:24.273036 env[1204]: time="2025-09-06T00:24:24.272982036Z" level=error msg="StartContainer for \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:24:24.273299 kubelet[1916]: E0906 00:24:24.273243 1916 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb" Sep 6 00:24:24.274420 kubelet[1916]: E0906 00:24:24.274360 1916 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 00:24:24.274420 kubelet[1916]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:24:24.274420 kubelet[1916]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:24:24.274420 kubelet[1916]: rm /hostbin/cilium-mount Sep 6 00:24:24.274582 kubelet[1916]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7646c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-72pmp_kube-system(de2ea3d6-ede0-4cd6-9074-68c77eceb37e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:24:24.274582 kubelet[1916]: > logger="UnhandledError" Sep 6 00:24:24.275624 kubelet[1916]: E0906 00:24:24.275544 1916 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-72pmp" podUID="de2ea3d6-ede0-4cd6-9074-68c77eceb37e" Sep 6 00:24:24.914834 kubelet[1916]: E0906 00:24:24.914794 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:25.264336 env[1204]: time="2025-09-06T00:24:25.264219690Z" level=info msg="StopPodSandbox for \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\"" Sep 6 00:24:25.264336 env[1204]: time="2025-09-06T00:24:25.264305484Z" level=info msg="Container to stop \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:24:25.266195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11-shm.mount: Deactivated successfully. Sep 6 00:24:25.273593 systemd[1]: cri-containerd-8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11.scope: Deactivated successfully. Sep 6 00:24:25.292225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:25.295962 env[1204]: time="2025-09-06T00:24:25.295908291Z" level=info msg="shim disconnected" id=8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11 Sep 6 00:24:25.296090 env[1204]: time="2025-09-06T00:24:25.295972423Z" level=warning msg="cleaning up after shim disconnected" id=8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11 namespace=k8s.io Sep 6 00:24:25.296090 env[1204]: time="2025-09-06T00:24:25.295989476Z" level=info msg="cleaning up dead shim" Sep 6 00:24:25.302551 env[1204]: time="2025-09-06T00:24:25.302517268Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n" Sep 6 00:24:25.302907 env[1204]: time="2025-09-06T00:24:25.302875342Z" level=info msg="TearDown network for sandbox \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\" successfully" Sep 6 00:24:25.302981 env[1204]: time="2025-09-06T00:24:25.302906162Z" level=info msg="StopPodSandbox for \"8bb2199229896cba79fd8ba4bf8b6a5353a1ebdc2a3519a178c005cd9dd50d11\" returns successfully" Sep 6 00:24:25.381614 kubelet[1916]: I0906 00:24:25.381566 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7646c\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-kube-api-access-7646c\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.381614 kubelet[1916]: I0906 00:24:25.381611 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-bpf-maps\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381634 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-etc-cni-netd\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381655 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cni-path\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381677 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-clustermesh-secrets\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381697 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-run\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381672 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381720 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-config-path\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381694 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cni-path" (OuterVolumeSpecName: "cni-path") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381760 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-lib-modules\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381780 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-net\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381804 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hubble-tls\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381822 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hostproc\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381842 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-ipsec-secrets\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381859 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-kernel\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381883 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-xtables-lock\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381913 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-cgroup\") pod \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\" (UID: \"de2ea3d6-ede0-4cd6-9074-68c77eceb37e\") " Sep 6 00:24:25.382006 kubelet[1916]: I0906 00:24:25.381952 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.382446 kubelet[1916]: I0906 00:24:25.381964 1916 reconciler_common.go:299] "Volume 
detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.382446 kubelet[1916]: I0906 00:24:25.381995 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.382446 kubelet[1916]: I0906 00:24:25.382020 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.382446 kubelet[1916]: I0906 00:24:25.382108 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.382446 kubelet[1916]: I0906 00:24:25.382140 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hostproc" (OuterVolumeSpecName: "hostproc") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.384173 kubelet[1916]: I0906 00:24:25.382195 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.384173 kubelet[1916]: I0906 00:24:25.382212 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.384173 kubelet[1916]: I0906 00:24:25.382229 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.384173 kubelet[1916]: I0906 00:24:25.383063 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:24:25.385438 kubelet[1916]: I0906 00:24:25.385420 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:24:25.385696 systemd[1]: var-lib-kubelet-pods-de2ea3d6\x2dede0\x2d4cd6\x2d9074\x2d68c77eceb37e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7646c.mount: Deactivated successfully. Sep 6 00:24:25.387602 systemd[1]: var-lib-kubelet-pods-de2ea3d6\x2dede0\x2d4cd6\x2d9074\x2d68c77eceb37e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:24:25.388435 kubelet[1916]: I0906 00:24:25.388241 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:24:25.388435 kubelet[1916]: I0906 00:24:25.388335 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:24:25.388622 kubelet[1916]: I0906 00:24:25.388593 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-kube-api-access-7646c" (OuterVolumeSpecName: "kube-api-access-7646c") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "kube-api-access-7646c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:24:25.389094 kubelet[1916]: I0906 00:24:25.389058 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de2ea3d6-ede0-4cd6-9074-68c77eceb37e" (UID: "de2ea3d6-ede0-4cd6-9074-68c77eceb37e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:24:25.482989 kubelet[1916]: I0906 00:24:25.482949 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.482989 kubelet[1916]: I0906 00:24:25.482969 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.482989 kubelet[1916]: I0906 00:24:25.482984 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483002 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-ipsec-secrets\") 
on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483013 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483024 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483033 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483046 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7646c\" (UniqueName: \"kubernetes.io/projected/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-kube-api-access-7646c\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483055 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483088 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483101 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 
00:24:25.483111 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.483145 kubelet[1916]: I0906 00:24:25.483121 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de2ea3d6-ede0-4cd6-9074-68c77eceb37e-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:24:25.626108 kubelet[1916]: I0906 00:24:25.626054 1916 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:24:25Z","lastTransitionTime":"2025-09-06T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:24:25.919558 systemd[1]: Removed slice kubepods-burstable-podde2ea3d6_ede0_4cd6_9074_68c77eceb37e.slice. Sep 6 00:24:26.079189 systemd[1]: var-lib-kubelet-pods-de2ea3d6\x2dede0\x2d4cd6\x2d9074\x2d68c77eceb37e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:24:26.079313 systemd[1]: var-lib-kubelet-pods-de2ea3d6\x2dede0\x2d4cd6\x2d9074\x2d68c77eceb37e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:24:26.267049 kubelet[1916]: I0906 00:24:26.266922 1916 scope.go:117] "RemoveContainer" containerID="0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb" Sep 6 00:24:26.268070 env[1204]: time="2025-09-06T00:24:26.268028931Z" level=info msg="RemoveContainer for \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\"" Sep 6 00:24:26.271279 env[1204]: time="2025-09-06T00:24:26.271235009Z" level=info msg="RemoveContainer for \"0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb\" returns successfully" Sep 6 00:24:26.293380 kubelet[1916]: I0906 00:24:26.293334 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="de2ea3d6-ede0-4cd6-9074-68c77eceb37e" containerName="mount-cgroup" Sep 6 00:24:26.294700 kubelet[1916]: I0906 00:24:26.294565 1916 status_manager.go:890] "Failed to get status for pod" podUID="bfd0c8db-f42f-4026-bf40-609adae33b84" pod="kube-system/cilium-rgc74" err="pods \"cilium-rgc74\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 6 00:24:26.295122 kubelet[1916]: W0906 00:24:26.295097 1916 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:24:26.295172 kubelet[1916]: E0906 00:24:26.295133 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 
00:24:26.295172 kubelet[1916]: W0906 00:24:26.295168 1916 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:24:26.295241 kubelet[1916]: E0906 00:24:26.295177 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:24:26.295241 kubelet[1916]: W0906 00:24:26.295212 1916 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:24:26.295241 kubelet[1916]: E0906 00:24:26.295220 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:24:26.295323 kubelet[1916]: W0906 00:24:26.295305 1916 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object 
Sep 6 00:24:26.295323 kubelet[1916]: E0906 00:24:26.295316 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:24:26.298243 systemd[1]: Created slice kubepods-burstable-podbfd0c8db_f42f_4026_bf40_609adae33b84.slice. Sep 6 00:24:26.388545 kubelet[1916]: I0906 00:24:26.388486 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-etc-cni-netd\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388545 kubelet[1916]: I0906 00:24:26.388537 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfd0c8db-f42f-4026-bf40-609adae33b84-clustermesh-secrets\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388558 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-host-proc-sys-kernel\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388604 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-xtables-lock\") pod \"cilium-rgc74\" 
(UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388620 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfd0c8db-f42f-4026-bf40-609adae33b84-hubble-tls\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388690 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-cilium-cgroup\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388741 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfd0c8db-f42f-4026-bf40-609adae33b84-cilium-config-path\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388758 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bfd0c8db-f42f-4026-bf40-609adae33b84-cilium-ipsec-secrets\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388774 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-host-proc-sys-net\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 
kubelet[1916]: I0906 00:24:26.388790 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-cni-path\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388802 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-bpf-maps\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388817 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf8qf\" (UniqueName: \"kubernetes.io/projected/bfd0c8db-f42f-4026-bf40-609adae33b84-kube-api-access-cf8qf\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388856 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-cilium-run\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388886 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-lib-modules\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:26.388988 kubelet[1916]: I0906 00:24:26.388925 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/bfd0c8db-f42f-4026-bf40-609adae33b84-hostproc\") pod \"cilium-rgc74\" (UID: \"bfd0c8db-f42f-4026-bf40-609adae33b84\") " pod="kube-system/cilium-rgc74" Sep 6 00:24:27.362859 kubelet[1916]: W0906 00:24:27.362788 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2ea3d6_ede0_4cd6_9074_68c77eceb37e.slice/cri-containerd-0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb.scope WatchSource:0}: container "0dc3ccf94f78dea04e0ca38295e89b2ca273a757cbc6d0c254bf8b6b74f8abbb" in namespace "k8s.io": not found Sep 6 00:24:27.491178 kubelet[1916]: E0906 00:24:27.491128 1916 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:24:27.491528 kubelet[1916]: E0906 00:24:27.491137 1916 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 6 00:24:27.491528 kubelet[1916]: E0906 00:24:27.491260 1916 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-rgc74: failed to sync secret cache: timed out waiting for the condition Sep 6 00:24:27.491528 kubelet[1916]: E0906 00:24:27.491239 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfd0c8db-f42f-4026-bf40-609adae33b84-clustermesh-secrets podName:bfd0c8db-f42f-4026-bf40-609adae33b84 nodeName:}" failed. No retries permitted until 2025-09-06 00:24:27.991207305 +0000 UTC m=+84.165417189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/bfd0c8db-f42f-4026-bf40-609adae33b84-clustermesh-secrets") pod "cilium-rgc74" (UID: "bfd0c8db-f42f-4026-bf40-609adae33b84") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:24:27.491528 kubelet[1916]: E0906 00:24:27.491343 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bfd0c8db-f42f-4026-bf40-609adae33b84-hubble-tls podName:bfd0c8db-f42f-4026-bf40-609adae33b84 nodeName:}" failed. No retries permitted until 2025-09-06 00:24:27.991322725 +0000 UTC m=+84.165532509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/bfd0c8db-f42f-4026-bf40-609adae33b84-hubble-tls") pod "cilium-rgc74" (UID: "bfd0c8db-f42f-4026-bf40-609adae33b84") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:24:27.916596 kubelet[1916]: I0906 00:24:27.916538 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2ea3d6-ede0-4cd6-9074-68c77eceb37e" path="/var/lib/kubelet/pods/de2ea3d6-ede0-4cd6-9074-68c77eceb37e/volumes" Sep 6 00:24:28.100930 kubelet[1916]: E0906 00:24:28.100889 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:28.101554 env[1204]: time="2025-09-06T00:24:28.101478364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rgc74,Uid:bfd0c8db-f42f-4026-bf40-609adae33b84,Namespace:kube-system,Attempt:0,}" Sep 6 00:24:28.145520 env[1204]: time="2025-09-06T00:24:28.145434003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:24:28.145520 env[1204]: time="2025-09-06T00:24:28.145478648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:24:28.145520 env[1204]: time="2025-09-06T00:24:28.145488988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:24:28.145710 env[1204]: time="2025-09-06T00:24:28.145655826Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7 pid=3888 runtime=io.containerd.runc.v2 Sep 6 00:24:28.162227 systemd[1]: run-containerd-runc-k8s.io-dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7-runc.4ToXqZ.mount: Deactivated successfully. Sep 6 00:24:28.163815 systemd[1]: Started cri-containerd-dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7.scope. Sep 6 00:24:28.183316 env[1204]: time="2025-09-06T00:24:28.182940366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rgc74,Uid:bfd0c8db-f42f-4026-bf40-609adae33b84,Namespace:kube-system,Attempt:0,} returns sandbox id \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\"" Sep 6 00:24:28.183671 kubelet[1916]: E0906 00:24:28.183646 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:28.185625 env[1204]: time="2025-09-06T00:24:28.185594013Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:24:28.197156 env[1204]: time="2025-09-06T00:24:28.197113180Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32\"" Sep 6 00:24:28.197578 
env[1204]: time="2025-09-06T00:24:28.197540436Z" level=info msg="StartContainer for \"6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32\"" Sep 6 00:24:28.213864 systemd[1]: Started cri-containerd-6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32.scope. Sep 6 00:24:28.239769 env[1204]: time="2025-09-06T00:24:28.239000108Z" level=info msg="StartContainer for \"6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32\" returns successfully" Sep 6 00:24:28.246681 systemd[1]: cri-containerd-6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32.scope: Deactivated successfully. Sep 6 00:24:28.273059 kubelet[1916]: E0906 00:24:28.272784 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:28.273606 env[1204]: time="2025-09-06T00:24:28.273546478Z" level=info msg="shim disconnected" id=6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32 Sep 6 00:24:28.273606 env[1204]: time="2025-09-06T00:24:28.273595802Z" level=warning msg="cleaning up after shim disconnected" id=6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32 namespace=k8s.io Sep 6 00:24:28.273606 env[1204]: time="2025-09-06T00:24:28.273604659Z" level=info msg="cleaning up dead shim" Sep 6 00:24:28.310548 env[1204]: time="2025-09-06T00:24:28.310491288Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" Sep 6 00:24:29.111406 kubelet[1916]: E0906 00:24:29.111363 1916 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:24:29.157385 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6d9a041cbf6ece8df9670857d23daaf1e56862ae42c41c5f6cb908b94869de32-rootfs.mount: Deactivated successfully. Sep 6 00:24:29.276094 kubelet[1916]: E0906 00:24:29.276066 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:29.277812 env[1204]: time="2025-09-06T00:24:29.277769509Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:24:29.296136 env[1204]: time="2025-09-06T00:24:29.296072988Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32\"" Sep 6 00:24:29.296671 env[1204]: time="2025-09-06T00:24:29.296639760Z" level=info msg="StartContainer for \"efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32\"" Sep 6 00:24:29.312713 systemd[1]: Started cri-containerd-efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32.scope. Sep 6 00:24:29.335609 env[1204]: time="2025-09-06T00:24:29.335560363Z" level=info msg="StartContainer for \"efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32\" returns successfully" Sep 6 00:24:29.340272 systemd[1]: cri-containerd-efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32.scope: Deactivated successfully. 
Sep 6 00:24:29.359935 env[1204]: time="2025-09-06T00:24:29.359888732Z" level=info msg="shim disconnected" id=efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32 Sep 6 00:24:29.359935 env[1204]: time="2025-09-06T00:24:29.359932385Z" level=warning msg="cleaning up after shim disconnected" id=efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32 namespace=k8s.io Sep 6 00:24:29.359935 env[1204]: time="2025-09-06T00:24:29.359940822Z" level=info msg="cleaning up dead shim" Sep 6 00:24:29.366612 env[1204]: time="2025-09-06T00:24:29.366541331Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4034 runtime=io.containerd.runc.v2\n" Sep 6 00:24:30.157617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efea2a75ad20b355a9b5ec5929f7db1fe795da18beed61d55524bc74cd38bc32-rootfs.mount: Deactivated successfully. Sep 6 00:24:30.279885 kubelet[1916]: E0906 00:24:30.279847 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:30.281436 env[1204]: time="2025-09-06T00:24:30.281377190Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:24:30.295628 env[1204]: time="2025-09-06T00:24:30.295567937Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63\"" Sep 6 00:24:30.296152 env[1204]: time="2025-09-06T00:24:30.296111725Z" level=info msg="StartContainer for \"ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63\"" Sep 6 00:24:30.313763 systemd[1]: Started 
cri-containerd-ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63.scope. Sep 6 00:24:30.341109 env[1204]: time="2025-09-06T00:24:30.341058187Z" level=info msg="StartContainer for \"ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63\" returns successfully" Sep 6 00:24:30.347890 systemd[1]: cri-containerd-ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63.scope: Deactivated successfully. Sep 6 00:24:30.370537 env[1204]: time="2025-09-06T00:24:30.370465839Z" level=info msg="shim disconnected" id=ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63 Sep 6 00:24:30.370537 env[1204]: time="2025-09-06T00:24:30.370526484Z" level=warning msg="cleaning up after shim disconnected" id=ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63 namespace=k8s.io Sep 6 00:24:30.370537 env[1204]: time="2025-09-06T00:24:30.370539198Z" level=info msg="cleaning up dead shim" Sep 6 00:24:30.377345 env[1204]: time="2025-09-06T00:24:30.377298105Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4090 runtime=io.containerd.runc.v2\n" Sep 6 00:24:31.157793 systemd[1]: run-containerd-runc-k8s.io-ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63-runc.bVeYEx.mount: Deactivated successfully. Sep 6 00:24:31.157924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce815abdc43715d24d3b4fc805f339d7a0a9c424706b0fc57d6c5018efb9ae63-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:31.283583 kubelet[1916]: E0906 00:24:31.283532 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:31.285592 env[1204]: time="2025-09-06T00:24:31.285546642Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:24:31.298797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442289685.mount: Deactivated successfully. Sep 6 00:24:31.300909 env[1204]: time="2025-09-06T00:24:31.300858086Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5\"" Sep 6 00:24:31.301438 env[1204]: time="2025-09-06T00:24:31.301409047Z" level=info msg="StartContainer for \"46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5\"" Sep 6 00:24:31.319985 systemd[1]: Started cri-containerd-46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5.scope. Sep 6 00:24:31.344793 systemd[1]: cri-containerd-46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5.scope: Deactivated successfully. 
Sep 6 00:24:31.345869 env[1204]: time="2025-09-06T00:24:31.345833289Z" level=info msg="StartContainer for \"46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5\" returns successfully" Sep 6 00:24:31.366326 env[1204]: time="2025-09-06T00:24:31.366263858Z" level=info msg="shim disconnected" id=46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5 Sep 6 00:24:31.366326 env[1204]: time="2025-09-06T00:24:31.366322410Z" level=warning msg="cleaning up after shim disconnected" id=46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5 namespace=k8s.io Sep 6 00:24:31.366326 env[1204]: time="2025-09-06T00:24:31.366332749Z" level=info msg="cleaning up dead shim" Sep 6 00:24:31.373176 env[1204]: time="2025-09-06T00:24:31.373115426Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:24:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4145 runtime=io.containerd.runc.v2\n" Sep 6 00:24:32.157833 systemd[1]: run-containerd-runc-k8s.io-46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5-runc.ROovhx.mount: Deactivated successfully. Sep 6 00:24:32.157930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46f116037dfb4f4fce91f0c7bd42deefd1f0ab891e8f7630108c1566bce701f5-rootfs.mount: Deactivated successfully. 
Sep 6 00:24:32.287808 kubelet[1916]: E0906 00:24:32.287778 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:32.291280 env[1204]: time="2025-09-06T00:24:32.291222156Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:24:32.309206 env[1204]: time="2025-09-06T00:24:32.309148015Z" level=info msg="CreateContainer within sandbox \"dda7ad6bf99f6b5641d7a4cecdc6f2f51ab74aecc4eb4443057ec99616ae36e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae\"" Sep 6 00:24:32.309741 env[1204]: time="2025-09-06T00:24:32.309699246Z" level=info msg="StartContainer for \"5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae\"" Sep 6 00:24:32.326920 systemd[1]: Started cri-containerd-5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae.scope. 
Sep 6 00:24:32.353002 env[1204]: time="2025-09-06T00:24:32.352931304Z" level=info msg="StartContainer for \"5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae\" returns successfully" Sep 6 00:24:32.627771 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 6 00:24:33.292956 kubelet[1916]: E0906 00:24:33.292916 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:33.307123 kubelet[1916]: I0906 00:24:33.307055 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rgc74" podStartSLOduration=7.307023763 podStartE2EDuration="7.307023763s" podCreationTimestamp="2025-09-06 00:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:24:33.306483022 +0000 UTC m=+89.480692816" watchObservedRunningTime="2025-09-06 00:24:33.307023763 +0000 UTC m=+89.481233557" Sep 6 00:24:34.294721 kubelet[1916]: E0906 00:24:34.294668 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:34.583059 systemd[1]: run-containerd-runc-k8s.io-5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae-runc.n176UG.mount: Deactivated successfully. 
Sep 6 00:24:35.242661 systemd-networkd[1025]: lxc_health: Link UP Sep 6 00:24:35.252874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:24:35.252723 systemd-networkd[1025]: lxc_health: Gained carrier Sep 6 00:24:36.102328 kubelet[1916]: E0906 00:24:36.102278 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:36.297762 kubelet[1916]: E0906 00:24:36.297690 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:36.674089 systemd[1]: run-containerd-runc-k8s.io-5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae-runc.OhzAVw.mount: Deactivated successfully. Sep 6 00:24:36.809969 systemd-networkd[1025]: lxc_health: Gained IPv6LL Sep 6 00:24:38.761764 systemd[1]: run-containerd-runc-k8s.io-5e829980c5c34b95212ef1cae2f4b6ffea04868dde59cf17621a2da4940b88ae-runc.qFaWxd.mount: Deactivated successfully. Sep 6 00:24:39.914805 kubelet[1916]: E0906 00:24:39.914751 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:24:40.882498 sshd[3752]: pam_unix(sshd:session): session closed for user core Sep 6 00:24:40.885629 systemd[1]: sshd@27-10.0.0.130:22-10.0.0.1:33674.service: Deactivated successfully. Sep 6 00:24:40.886313 systemd[1]: session-28.scope: Deactivated successfully. Sep 6 00:24:40.886806 systemd-logind[1188]: Session 28 logged out. Waiting for processes to exit. Sep 6 00:24:40.887513 systemd-logind[1188]: Removed session 28.