Sep 13 00:42:30.060865 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:42:30.060887 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:42:30.060895 kernel: BIOS-provided physical RAM map:
Sep 13 00:42:30.060901 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:42:30.060906 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:42:30.060912 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:42:30.060919 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:42:30.060924 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:42:30.060931 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:42:30.060937 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:42:30.060942 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:42:30.060948 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:42:30.060953 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:42:30.060959 kernel: NX (Execute Disable) protection: active
Sep 13 00:42:30.060967 kernel: SMBIOS 2.8 present.
Sep 13 00:42:30.060974 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:42:30.060980 kernel: Hypervisor detected: KVM
Sep 13 00:42:30.060985 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:42:30.060994 kernel: kvm-clock: cpu 0, msr 2f19f001, primary cpu clock
Sep 13 00:42:30.061001 kernel: kvm-clock: using sched offset of 3098682762 cycles
Sep 13 00:42:30.061007 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:42:30.061013 kernel: tsc: Detected 2794.750 MHz processor
Sep 13 00:42:30.061020 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:42:30.061027 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:42:30.061034 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:42:30.061040 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:42:30.061046 kernel: Using GB pages for direct mapping
Sep 13 00:42:30.061052 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:42:30.061058 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:42:30.061065 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061071 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061077 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061084 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:42:30.061090 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061097 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061103 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061109 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:42:30.061115 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:42:30.061121 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:42:30.061135 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:42:30.061145 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:42:30.061151 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:42:30.061158 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:42:30.061165 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:42:30.061171 kernel: No NUMA configuration found
Sep 13 00:42:30.061178 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:42:30.061186 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:42:30.061192 kernel: Zone ranges:
Sep 13 00:42:30.061199 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:42:30.061206 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:42:30.061212 kernel: Normal empty
Sep 13 00:42:30.061219 kernel: Movable zone start for each node
Sep 13 00:42:30.061225 kernel: Early memory node ranges
Sep 13 00:42:30.061232 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:42:30.061238 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:42:30.061246 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:42:30.061255 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:42:30.061262 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:42:30.061268 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:42:30.061275 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:42:30.061281 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:42:30.061288 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:42:30.061294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:42:30.061301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:42:30.061308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:42:30.061318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:42:30.061324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:42:30.061331 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:42:30.061337 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:42:30.061344 kernel: TSC deadline timer available
Sep 13 00:42:30.061350 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:42:30.061357 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:42:30.061363 kernel: kvm-guest: setup PV sched yield
Sep 13 00:42:30.061370 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:42:30.061378 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:42:30.061385 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:42:30.061391 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:42:30.061399 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 13 00:42:30.061409 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 13 00:42:30.061423 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:42:30.061432 kernel: kvm-guest: setup async PF for cpu 0
Sep 13 00:42:30.061446 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 13 00:42:30.061456 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:42:30.061468 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:42:30.061475 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:42:30.061481 kernel: Policy zone: DMA32
Sep 13 00:42:30.061489 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:42:30.061496 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:42:30.061503 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:42:30.061509 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:42:30.061516 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:42:30.061524 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 13 00:42:30.061531 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:42:30.061537 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:42:30.061544 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:42:30.061550 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:42:30.061557 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:42:30.061564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:42:30.061571 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:42:30.061577 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:42:30.061585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:42:30.061592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:42:30.061599 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:42:30.061605 kernel: random: crng init done
Sep 13 00:42:30.061611 kernel: Console: colour VGA+ 80x25
Sep 13 00:42:30.061618 kernel: printk: console [ttyS0] enabled
Sep 13 00:42:30.061624 kernel: ACPI: Core revision 20210730
Sep 13 00:42:30.061631 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:42:30.061638 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:42:30.061645 kernel: x2apic enabled
Sep 13 00:42:30.061652 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:42:30.061662 kernel: kvm-guest: setup PV IPIs
Sep 13 00:42:30.061669 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:42:30.061676 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:42:30.061706 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 13 00:42:30.061713 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:42:30.061719 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:42:30.061726 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:42:30.061740 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:42:30.061747 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:42:30.061754 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:42:30.061762 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:42:30.061769 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:42:30.061775 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:42:30.061782 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:42:30.061789 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:42:30.061796 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:42:30.061804 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:42:30.061811 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:42:30.061818 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:42:30.061825 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:42:30.061832 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:42:30.061839 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:42:30.061845 kernel: LSM: Security Framework initializing
Sep 13 00:42:30.061852 kernel: SELinux: Initializing.
Sep 13 00:42:30.061860 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:42:30.061867 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:42:30.061874 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:42:30.061881 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:42:30.061888 kernel: ... version: 0
Sep 13 00:42:30.061895 kernel: ... bit width: 48
Sep 13 00:42:30.061902 kernel: ... generic registers: 6
Sep 13 00:42:30.061908 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:42:30.061915 kernel: ... max period: 00007fffffffffff
Sep 13 00:42:30.061923 kernel: ... fixed-purpose events: 0
Sep 13 00:42:30.061930 kernel: ... event mask: 000000000000003f
Sep 13 00:42:30.061937 kernel: signal: max sigframe size: 1776
Sep 13 00:42:30.061944 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:42:30.061950 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:42:30.061957 kernel: x86: Booting SMP configuration:
Sep 13 00:42:30.061964 kernel: .... node #0, CPUs: #1
Sep 13 00:42:30.061971 kernel: kvm-clock: cpu 1, msr 2f19f041, secondary cpu clock
Sep 13 00:42:30.061978 kernel: kvm-guest: setup async PF for cpu 1
Sep 13 00:42:30.061986 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 13 00:42:30.061993 kernel: #2
Sep 13 00:42:30.062000 kernel: kvm-clock: cpu 2, msr 2f19f081, secondary cpu clock
Sep 13 00:42:30.062007 kernel: kvm-guest: setup async PF for cpu 2
Sep 13 00:42:30.062013 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 13 00:42:30.062023 kernel: #3
Sep 13 00:42:30.062030 kernel: kvm-clock: cpu 3, msr 2f19f0c1, secondary cpu clock
Sep 13 00:42:30.062037 kernel: kvm-guest: setup async PF for cpu 3
Sep 13 00:42:30.062044 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 13 00:42:30.062052 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:42:30.062059 kernel: smpboot: Max logical packages: 1
Sep 13 00:42:30.062066 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 13 00:42:30.062072 kernel: devtmpfs: initialized
Sep 13 00:42:30.062079 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:42:30.062086 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:42:30.062093 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:42:30.062100 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:42:30.062107 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:42:30.062115 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:42:30.062122 kernel: audit: type=2000 audit(1757724149.493:1): state=initialized audit_enabled=0 res=1
Sep 13 00:42:30.062136 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:42:30.062143 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:42:30.062150 kernel: cpuidle: using governor menu
Sep 13 00:42:30.062157 kernel: ACPI: bus type PCI registered
Sep 13 00:42:30.062163 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:42:30.062170 kernel: dca service started, version 1.12.1
Sep 13 00:42:30.062179 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:42:30.062187 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 13 00:42:30.062194 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:42:30.062201 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:42:30.062207 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:42:30.062214 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:42:30.062221 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:42:30.062228 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:42:30.062235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:42:30.062242 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:42:30.062250 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:42:30.062257 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:42:30.062263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:42:30.062270 kernel: ACPI: Interpreter enabled
Sep 13 00:42:30.062277 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:42:30.062284 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:42:30.062291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:42:30.062298 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:42:30.062305 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:42:30.062448 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:42:30.062528 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:42:30.062603 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:42:30.062612 kernel: PCI host bridge to bus 0000:00
Sep 13 00:42:30.062726 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:42:30.062797 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:42:30.062868 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:42:30.062939 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:42:30.063046 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:42:30.063176 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:42:30.063619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:42:30.063747 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:42:30.063840 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:42:30.063920 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:42:30.063992 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:42:30.064064 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:42:30.064146 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:42:30.064253 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:42:30.064332 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:42:30.064410 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:42:30.064509 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:42:30.064600 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:42:30.064717 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:42:30.064815 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:42:30.064935 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:42:30.065063 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:42:30.065170 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:42:30.065251 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:42:30.065368 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:42:30.065484 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:42:30.065632 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:42:30.065778 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:42:30.065928 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:42:30.066047 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:42:30.066178 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:42:30.066309 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:42:30.066422 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:42:30.066433 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:42:30.066455 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:42:30.066463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:42:30.066470 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:42:30.066481 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:42:30.066487 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:42:30.066494 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:42:30.066501 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:42:30.066522 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:42:30.066529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:42:30.066536 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:42:30.066543 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:42:30.066550 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:42:30.066559 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:42:30.066579 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:42:30.066587 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:42:30.066593 kernel: iommu: Default domain type: Translated
Sep 13 00:42:30.066601 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:42:30.066751 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:42:30.066864 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:42:30.066960 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:42:30.066973 kernel: vgaarb: loaded
Sep 13 00:42:30.066981 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:42:30.066991 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:42:30.066999 kernel: PTP clock support registered
Sep 13 00:42:30.067006 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:42:30.067013 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:42:30.067020 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:42:30.067027 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:42:30.067034 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:42:30.067043 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:42:30.067050 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:42:30.067057 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:42:30.067064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:42:30.067071 kernel: pnp: PnP ACPI init
Sep 13 00:42:30.067175 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:42:30.067186 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:42:30.067194 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:42:30.067201 kernel: NET: Registered PF_INET protocol family
Sep 13 00:42:30.067210 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:42:30.067217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:42:30.067224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:42:30.067231 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:42:30.067238 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:42:30.067245 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:42:30.067252 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:42:30.067259 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:42:30.067267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:42:30.067274 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:42:30.067341 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:42:30.067406 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:42:30.067469 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:42:30.067534 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:42:30.067599 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:42:30.067663 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:42:30.067672 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:42:30.067694 kernel: Initialise system trusted keyrings
Sep 13 00:42:30.067701 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:42:30.067708 kernel: Key type asymmetric registered
Sep 13 00:42:30.067715 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:42:30.067721 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:42:30.067728 kernel: io scheduler mq-deadline registered
Sep 13 00:42:30.067735 kernel: io scheduler kyber registered
Sep 13 00:42:30.067742 kernel: io scheduler bfq registered
Sep 13 00:42:30.067749 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:42:30.067758 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:42:30.067765 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:42:30.067772 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:42:30.067779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:42:30.067786 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:42:30.067793 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:42:30.067800 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:42:30.067807 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:42:30.067906 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:42:30.067920 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 13 00:42:30.067987 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:42:30.068055 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:42:29 UTC (1757724149)
Sep 13 00:42:30.068122 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:42:30.068140 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:42:30.068166 kernel: Segment Routing with IPv6
Sep 13 00:42:30.068179 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:42:30.068186 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:42:30.068196 kernel: Key type dns_resolver registered
Sep 13 00:42:30.068203 kernel: IPI shorthand broadcast: enabled
Sep 13 00:42:30.068210 kernel: sched_clock: Marking stable (596525309, 105928171)->(721799875, -19346395)
Sep 13 00:42:30.068217 kernel: registered taskstats version 1
Sep 13 00:42:30.068224 kernel: Loading compiled-in X.509 certificates
Sep 13 00:42:30.068231 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:42:30.068238 kernel: Key type .fscrypt registered
Sep 13 00:42:30.068245 kernel: Key type fscrypt-provisioning registered
Sep 13 00:42:30.068252 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:42:30.068261 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:42:30.068268 kernel: ima: No architecture policies found
Sep 13 00:42:30.068274 kernel: clk: Disabling unused clocks
Sep 13 00:42:30.068282 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:42:30.068288 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:42:30.068295 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:42:30.068302 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:42:30.068309 kernel: Run /init as init process
Sep 13 00:42:30.068317 kernel: with arguments:
Sep 13 00:42:30.068324 kernel: /init
Sep 13 00:42:30.068331 kernel: with environment:
Sep 13 00:42:30.068338 kernel: HOME=/
Sep 13 00:42:30.068344 kernel: TERM=linux
Sep 13 00:42:30.068351 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:42:30.068361 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:42:30.068370 systemd[1]: Detected virtualization kvm.
Sep 13 00:42:30.068379 systemd[1]: Detected architecture x86-64.
Sep 13 00:42:30.068386 systemd[1]: Running in initrd.
Sep 13 00:42:30.068393 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:42:30.068400 systemd[1]: Hostname set to .
Sep 13 00:42:30.068414 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:42:30.068422 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:42:30.068429 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:42:30.068436 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:42:30.068443 systemd[1]: Reached target paths.target.
Sep 13 00:42:30.068453 systemd[1]: Reached target slices.target.
Sep 13 00:42:30.068467 systemd[1]: Reached target swap.target.
Sep 13 00:42:30.068476 systemd[1]: Reached target timers.target.
Sep 13 00:42:30.068484 systemd[1]: Listening on iscsid.socket.
Sep 13 00:42:30.068491 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:42:30.068500 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:42:30.068508 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:42:30.068516 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:42:30.068523 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:42:30.068531 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:42:30.068538 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:42:30.068546 systemd[1]: Reached target sockets.target.
Sep 13 00:42:30.068554 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:42:30.068561 systemd[1]: Finished network-cleanup.service.
Sep 13 00:42:30.068570 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:42:30.068578 systemd[1]: Starting systemd-journald.service...
Sep 13 00:42:30.068586 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:42:30.068593 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:42:30.068601 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:42:30.068609 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:42:30.068616 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:42:30.068624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:42:30.068631 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:42:30.068643 systemd-journald[199]: Journal started
Sep 13 00:42:30.068696 systemd-journald[199]: Runtime Journal (/run/log/journal/367c0f4cc4b043cd8c7101cce0071f84) is 6.0M, max 48.5M, 42.5M free.
Sep 13 00:42:30.063500 systemd-modules-load[200]: Inserted module 'overlay'
Sep 13 00:42:30.141152 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:42:30.141180 kernel: Bridge firewalling registered
Sep 13 00:42:30.074271 systemd-resolved[201]: Positive Trust Anchors:
Sep 13 00:42:30.145272 systemd[1]: Started systemd-journald.service.
Sep 13 00:42:30.145303 kernel: audit: type=1130 audit(1757724150.140:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.074283 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:42:30.074310 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:42:30.076486 systemd-resolved[201]: Defaulting to hostname 'linux'.
Sep 13 00:42:30.140229 systemd-modules-load[200]: Inserted module 'br_netfilter'
Sep 13 00:42:30.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.223806 systemd[1]: Started systemd-resolved.service.
Sep 13 00:42:30.228017 kernel: audit: type=1130 audit(1757724150.223:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.228251 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:42:30.233040 kernel: audit: type=1130 audit(1757724150.227:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.233063 kernel: SCSI subsystem initialized
Sep 13 00:42:30.233081 kernel: audit: type=1130 audit(1757724150.232:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.233117 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:42:30.237974 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:42:30.242968 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:42:30.242999 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:42:30.244185 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:42:30.246919 systemd-modules-load[200]: Inserted module 'dm_multipath'
Sep 13 00:42:30.248484 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:42:30.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.250714 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:42:30.324801 kernel: audit: type=1130 audit(1757724150.249:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.325405 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:42:30.329774 kernel: audit: type=1130 audit(1757724150.325:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.329849 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:42:30.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:30.332015 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:42:30.335375 kernel: audit: type=1130 audit(1757724150.330:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 00:42:30.340740 dracut-cmdline[222]: dracut-dracut-053 Sep 13 00:42:30.342812 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:42:30.442723 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:42:30.459711 kernel: iscsi: registered transport (tcp) Sep 13 00:42:30.515734 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:42:30.515778 kernel: QLogic iSCSI HBA Driver Sep 13 00:42:30.553977 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:42:30.558338 kernel: audit: type=1130 audit(1757724150.553:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:30.558322 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 00:42:30.609726 kernel: raid6: avx2x4 gen() 30112 MB/s Sep 13 00:42:30.626709 kernel: raid6: avx2x4 xor() 7583 MB/s Sep 13 00:42:30.643709 kernel: raid6: avx2x2 gen() 32251 MB/s Sep 13 00:42:30.660705 kernel: raid6: avx2x2 xor() 19288 MB/s Sep 13 00:42:30.741711 kernel: raid6: avx2x1 gen() 25968 MB/s Sep 13 00:42:30.758724 kernel: raid6: avx2x1 xor() 13464 MB/s Sep 13 00:42:30.775710 kernel: raid6: sse2x4 gen() 13441 MB/s Sep 13 00:42:30.858736 kernel: raid6: sse2x4 xor() 6371 MB/s Sep 13 00:42:30.875709 kernel: raid6: sse2x2 gen() 15979 MB/s Sep 13 00:42:30.951717 kernel: raid6: sse2x2 xor() 9842 MB/s Sep 13 00:42:30.968699 kernel: raid6: sse2x1 gen() 12136 MB/s Sep 13 00:42:30.986024 kernel: raid6: sse2x1 xor() 7818 MB/s Sep 13 00:42:30.986046 kernel: raid6: using algorithm avx2x2 gen() 32251 MB/s Sep 13 00:42:30.986057 kernel: raid6: .... xor() 19288 MB/s, rmw enabled Sep 13 00:42:30.986712 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:42:30.998702 kernel: xor: automatically using best checksumming function avx Sep 13 00:42:31.087710 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:42:31.094615 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:42:31.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:31.098000 audit: BPF prog-id=7 op=LOAD Sep 13 00:42:31.098000 audit: BPF prog-id=8 op=LOAD Sep 13 00:42:31.099600 systemd[1]: Starting systemd-udevd.service... Sep 13 00:42:31.100983 kernel: audit: type=1130 audit(1757724151.095:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:31.111503 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 13 00:42:31.132802 systemd[1]: Started systemd-udevd.service. 
Sep 13 00:42:31.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:31.133990 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:42:31.144668 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Sep 13 00:42:31.168725 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:42:31.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:31.190858 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:42:31.226817 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:42:31.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:31.261078 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:42:31.268208 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:42:31.268226 kernel: GPT:9289727 != 19775487 Sep 13 00:42:31.268240 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:42:31.268250 kernel: GPT:9289727 != 19775487 Sep 13 00:42:31.268258 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:42:31.268266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:31.268276 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:42:31.281962 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:42:31.282015 kernel: AES CTR mode by8 optimization enabled Sep 13 00:42:31.288700 kernel: libata version 3.00 loaded. 
Sep 13 00:42:31.296716 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (450) Sep 13 00:42:31.301698 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:42:31.308641 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:42:31.308656 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:42:31.308763 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:42:31.308843 kernel: scsi host0: ahci Sep 13 00:42:31.308933 kernel: scsi host1: ahci Sep 13 00:42:31.309018 kernel: scsi host2: ahci Sep 13 00:42:31.309122 kernel: scsi host3: ahci Sep 13 00:42:31.309220 kernel: scsi host4: ahci Sep 13 00:42:31.309304 kernel: scsi host5: ahci Sep 13 00:42:31.309404 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:42:31.309414 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:42:31.309423 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:42:31.309434 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:42:31.309443 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:42:31.309452 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:42:31.302627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:42:31.344265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:42:31.347457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:42:31.350084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:42:31.352419 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:42:31.355722 systemd[1]: Starting disk-uuid.service... 
Sep 13 00:42:31.650707 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:31.650740 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:31.651706 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:31.652707 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:31.652729 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:42:31.653702 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:31.654708 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:42:31.656378 kernel: ata3.00: applying bridge limits Sep 13 00:42:31.657117 kernel: ata3.00: configured for UDMA/100 Sep 13 00:42:31.657722 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:42:31.698911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:42:31.716327 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:42:31.716341 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:42:31.834976 disk-uuid[526]: Primary Header is updated. Sep 13 00:42:31.834976 disk-uuid[526]: Secondary Entries is updated. Sep 13 00:42:31.834976 disk-uuid[526]: Secondary Header is updated. Sep 13 00:42:31.839715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:31.873722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:32.877461 disk-uuid[543]: The operation has completed successfully. Sep 13 00:42:32.878811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:42:32.903759 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:42:32.903862 systemd[1]: Finished disk-uuid.service. Sep 13 00:42:32.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:32.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:32.916492 systemd[1]: Starting verity-setup.service... Sep 13 00:42:32.931751 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:42:32.955336 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:42:32.957241 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:42:32.959577 systemd[1]: Finished verity-setup.service. Sep 13 00:42:32.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.050711 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:42:33.050926 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:42:33.051827 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:42:33.052466 systemd[1]: Starting ignition-setup.service... Sep 13 00:42:33.067699 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:42:33.075288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:33.075351 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:42:33.075365 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:42:33.083730 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:42:33.140477 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:42:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:33.141000 audit: BPF prog-id=9 op=LOAD Sep 13 00:42:33.142872 systemd[1]: Starting systemd-networkd.service... Sep 13 00:42:33.163432 systemd-networkd[711]: lo: Link UP Sep 13 00:42:33.163444 systemd-networkd[711]: lo: Gained carrier Sep 13 00:42:33.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.163968 systemd-networkd[711]: Enumeration completed Sep 13 00:42:33.164069 systemd[1]: Started systemd-networkd.service. Sep 13 00:42:33.164277 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:42:33.165574 systemd[1]: Reached target network.target. Sep 13 00:42:33.166196 systemd-networkd[711]: eth0: Link UP Sep 13 00:42:33.166201 systemd-networkd[711]: eth0: Gained carrier Sep 13 00:42:33.167674 systemd[1]: Starting iscsiuio.service... Sep 13 00:42:33.186928 systemd[1]: Started iscsiuio.service. Sep 13 00:42:33.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.188865 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:42:33.189164 systemd[1]: Starting iscsid.service... Sep 13 00:42:33.192410 iscsid[716]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:42:33.192410 iscsid[716]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:42:33.192410 iscsid[716]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:42:33.192410 iscsid[716]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:42:33.192410 iscsid[716]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:42:33.192410 iscsid[716]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:42:33.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.193646 systemd[1]: Started iscsid.service. Sep 13 00:42:33.195511 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:42:33.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.205077 systemd[1]: Finished ignition-setup.service. Sep 13 00:42:33.207989 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:42:33.209076 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:42:33.210602 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:42:33.210939 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:42:33.211177 systemd[1]: Reached target remote-fs.target. Sep 13 00:42:33.212130 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:42:33.221998 systemd[1]: Finished dracut-pre-mount.service. 
Sep 13 00:42:33.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.323434 ignition[726]: Ignition 2.14.0 Sep 13 00:42:33.323447 ignition[726]: Stage: fetch-offline Sep 13 00:42:33.323586 ignition[726]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:33.323596 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:33.323800 ignition[726]: parsed url from cmdline: "" Sep 13 00:42:33.323803 ignition[726]: no config URL provided Sep 13 00:42:33.323808 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:42:33.323816 ignition[726]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:42:33.323866 ignition[726]: op(1): [started] loading QEMU firmware config module Sep 13 00:42:33.323871 ignition[726]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:42:33.329276 ignition[726]: op(1): [finished] loading QEMU firmware config module Sep 13 00:42:33.371206 ignition[726]: parsing config with SHA512: e1f352e883376d2399768711996376eac08c83fded8b74f23ff04fb896ccf7a84da8dd0cbbb2be7eea20ed34a4f39e51ea5f7a5d4a663d3bef017c0174f8fed8 Sep 13 00:42:33.390924 unknown[726]: fetched base config from "system" Sep 13 00:42:33.390941 unknown[726]: fetched user config from "qemu" Sep 13 00:42:33.393368 ignition[726]: fetch-offline: fetch-offline passed Sep 13 00:42:33.394390 ignition[726]: Ignition finished successfully Sep 13 00:42:33.396386 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:42:33.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.397266 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Sep 13 00:42:33.398372 systemd[1]: Starting ignition-kargs.service... Sep 13 00:42:33.413768 ignition[739]: Ignition 2.14.0 Sep 13 00:42:33.413784 ignition[739]: Stage: kargs Sep 13 00:42:33.413910 ignition[739]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:33.413922 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:33.418663 ignition[739]: kargs: kargs passed Sep 13 00:42:33.418753 ignition[739]: Ignition finished successfully Sep 13 00:42:33.421556 systemd[1]: Finished ignition-kargs.service. Sep 13 00:42:33.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.424394 systemd[1]: Starting ignition-disks.service... Sep 13 00:42:33.433471 ignition[745]: Ignition 2.14.0 Sep 13 00:42:33.433484 ignition[745]: Stage: disks Sep 13 00:42:33.433613 ignition[745]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:33.433626 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:33.437744 ignition[745]: disks: disks passed Sep 13 00:42:33.437801 ignition[745]: Ignition finished successfully Sep 13 00:42:33.440382 systemd[1]: Finished ignition-disks.service. Sep 13 00:42:33.441154 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:42:33.441380 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:42:33.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.444181 systemd[1]: Reached target local-fs.target. Sep 13 00:42:33.444430 systemd[1]: Reached target sysinit.target. Sep 13 00:42:33.445969 systemd[1]: Reached target basic.target. Sep 13 00:42:33.448606 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:42:33.462959 systemd-fsck[753]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:42:33.474350 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:42:33.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.476110 systemd[1]: Mounting sysroot.mount... Sep 13 00:42:33.484712 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:42:33.485560 systemd[1]: Mounted sysroot.mount. Sep 13 00:42:33.485946 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:42:33.487491 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:42:33.489086 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:42:33.489116 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:42:33.489136 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:42:33.490845 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:42:33.493621 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:42:33.500402 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:42:33.504729 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:42:33.509669 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:42:33.513935 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:42:33.549066 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:42:33.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:33.551143 systemd[1]: Starting ignition-mount.service... Sep 13 00:42:33.553583 systemd[1]: Starting sysroot-boot.service... Sep 13 00:42:33.558708 bash[804]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:42:33.573534 ignition[806]: INFO : Ignition 2.14.0 Sep 13 00:42:33.573534 ignition[806]: INFO : Stage: mount Sep 13 00:42:33.575835 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:33.575835 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:33.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:33.576363 systemd[1]: Finished sysroot-boot.service. Sep 13 00:42:33.580451 ignition[806]: INFO : mount: mount passed Sep 13 00:42:33.580451 ignition[806]: INFO : Ignition finished successfully Sep 13 00:42:33.578011 systemd[1]: Finished ignition-mount.service. Sep 13 00:42:33.998626 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:42:34.005709 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Sep 13 00:42:34.007944 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:34.007971 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:42:34.007985 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:42:34.011787 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:42:34.014026 systemd[1]: Starting ignition-files.service... 
Sep 13 00:42:34.067758 ignition[835]: INFO : Ignition 2.14.0 Sep 13 00:42:34.067758 ignition[835]: INFO : Stage: files Sep 13 00:42:34.069550 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:34.069550 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:42:34.069550 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:42:34.073416 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:42:34.073416 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:42:34.076323 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:42:34.077968 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:42:34.080169 unknown[835]: wrote ssh authorized keys file for user: core Sep 13 00:42:34.081345 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:42:34.082817 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:42:34.082817 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 00:42:34.127930 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:42:34.350353 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:42:34.353015 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:42:34.353015 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 00:42:34.466845 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:42:34.574490 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:42:34.574490 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:42:34.578408 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 00:42:34.608974 systemd-networkd[711]: eth0: Gained IPv6LL
Sep 13 00:42:34.949826 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:42:35.553825 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 00:42:35.553825 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:42:35.558036 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:35.574716 ignition[835]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:35.574716 ignition[835]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:42:35.574716 ignition[835]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:42:35.601892 ignition[835]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:42:35.603527 ignition[835]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:42:35.605144 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:35.607017 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:35.608747 ignition[835]: INFO : files: files passed
Sep 13 00:42:35.609613 ignition[835]: INFO : Ignition finished successfully
Sep 13 00:42:35.611813 systemd[1]: Finished ignition-files.service.
Sep 13 00:42:35.618004 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 13 00:42:35.618041 kernel: audit: type=1130 audit(1757724155.611:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.613465 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:42:35.617878 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:42:35.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.622818 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 13 00:42:35.628081 kernel: audit: type=1130 audit(1757724155.622:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.618839 systemd[1]: Starting ignition-quench.service...
Sep 13 00:42:35.635408 kernel: audit: type=1130 audit(1757724155.627:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.635535 kernel: audit: type=1131 audit(1757724155.627:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.635649 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:42:35.620232 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:42:35.622913 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:42:35.622983 systemd[1]: Finished ignition-quench.service.
Sep 13 00:42:35.628194 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:42:35.636260 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:42:35.650878 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:42:35.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.650964 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:42:35.660746 kernel: audit: type=1130 audit(1757724155.652:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.660768 kernel: audit: type=1131 audit(1757724155.652:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.652911 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:42:35.659273 systemd[1]: Reached target initrd.target.
Sep 13 00:42:35.660775 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:42:35.661443 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:42:35.675080 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:42:35.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.677767 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:42:35.681411 kernel: audit: type=1130 audit(1757724155.676:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.687387 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:42:35.689043 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:42:35.690815 systemd[1]: Stopped target timers.target.
Sep 13 00:42:35.692322 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:42:35.693334 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:42:35.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.695082 systemd[1]: Stopped target initrd.target.
Sep 13 00:42:35.699235 kernel: audit: type=1131 audit(1757724155.694:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.699318 systemd[1]: Stopped target basic.target.
Sep 13 00:42:35.700948 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:42:35.702877 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:42:35.704784 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:42:35.706732 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:42:35.708451 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:42:35.710271 systemd[1]: Stopped target sysinit.target.
Sep 13 00:42:35.711936 systemd[1]: Stopped target local-fs.target.
Sep 13 00:42:35.713630 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:42:35.715433 systemd[1]: Stopped target swap.target.
Sep 13 00:42:35.717011 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:42:35.718158 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:42:35.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.720001 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:42:35.724441 kernel: audit: type=1131 audit(1757724155.719:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.724319 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:42:35.724431 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:42:35.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.727187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:42:35.731026 kernel: audit: type=1131 audit(1757724155.727:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.727316 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:42:35.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.732922 systemd[1]: Stopped target paths.target.
Sep 13 00:42:35.734705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:42:35.740736 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:42:35.742678 systemd[1]: Stopped target slices.target.
Sep 13 00:42:35.744346 systemd[1]: Stopped target sockets.target.
Sep 13 00:42:35.746023 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:42:35.746983 systemd[1]: Closed iscsid.socket.
Sep 13 00:42:35.748456 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:42:35.749413 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:42:35.750978 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:42:35.752303 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:42:35.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.754713 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:42:35.755842 systemd[1]: Stopped ignition-files.service.
Sep 13 00:42:35.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.758524 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:42:35.760107 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:42:35.761260 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:42:35.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.764128 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:42:35.765758 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:42:35.767101 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:42:35.769447 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:42:35.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.770777 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:42:35.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.777641 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:42:35.777759 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:42:35.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.780850 ignition[875]: INFO : Ignition 2.14.0
Sep 13 00:42:35.780850 ignition[875]: INFO : Stage: umount
Sep 13 00:42:35.782733 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:35.782733 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:42:35.782733 ignition[875]: INFO : umount: umount passed
Sep 13 00:42:35.782733 ignition[875]: INFO : Ignition finished successfully
Sep 13 00:42:35.787853 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:42:35.789202 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:42:35.790155 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:42:35.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.791819 systemd[1]: Stopped target network.target.
Sep 13 00:42:35.793475 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:42:35.793525 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:42:35.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.795828 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:42:35.795862 systemd[1]: Stopped ignition-kargs.service.
Sep 13 00:42:35.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.797502 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:42:35.797537 systemd[1]: Stopped ignition-setup.service.
Sep 13 00:42:35.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.800653 systemd[1]: Stopping systemd-networkd.service...
Sep 13 00:42:35.802275 systemd[1]: Stopping systemd-resolved.service...
Sep 13 00:42:35.811741 systemd-networkd[711]: eth0: DHCPv6 lease lost
Sep 13 00:42:35.813155 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:42:35.813272 systemd[1]: Stopped systemd-networkd.service.
Sep 13 00:42:35.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.815530 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:42:35.815570 systemd[1]: Closed systemd-networkd.socket.
Sep 13 00:42:35.817482 systemd[1]: Stopping network-cleanup.service...
Sep 13 00:42:35.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.819479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:42:35.821000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 00:42:35.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.819547 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 00:42:35.821426 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:42:35.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.821471 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:42:35.824363 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:42:35.824415 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 00:42:35.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.826349 systemd[1]: Stopping systemd-udevd.service...
Sep 13 00:42:35.828054 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:42:35.828533 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:42:35.828656 systemd[1]: Stopped systemd-resolved.service.
Sep 13 00:42:35.835000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 00:42:35.837837 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:42:35.839026 systemd[1]: Stopped network-cleanup.service.
Sep 13 00:42:35.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.841467 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:42:35.842740 systemd[1]: Stopped systemd-udevd.service.
Sep 13 00:42:35.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.845030 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:42:35.845080 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 00:42:35.847953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:42:35.848005 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 00:42:35.850836 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:42:35.850884 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 00:42:35.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.853726 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:42:35.853789 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 00:42:35.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.855537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:42:35.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.855578 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 00:42:35.858219 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 00:42:35.858567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:42:35.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.858623 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 00:42:35.864744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:42:35.864829 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 00:42:35.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.898009 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:42:35.898111 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:42:35.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.900101 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 00:42:35.901061 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:42:35.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:35.901112 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 00:42:35.903488 systemd[1]: Starting initrd-switch-root.service...
Sep 13 00:42:35.919099 systemd[1]: Switching root.
Sep 13 00:42:35.938050 iscsid[716]: iscsid shutting down.
Sep 13 00:42:35.938917 systemd-journald[199]: Received SIGTERM from PID 1 (n/a).
Sep 13 00:42:35.938947 systemd-journald[199]: Journal stopped
Sep 13 00:42:39.601086 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 00:42:39.601140 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 00:42:39.601152 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 00:42:39.601162 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:42:39.601250 kernel: SELinux: policy capability open_perms=1
Sep 13 00:42:39.601263 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:42:39.601278 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:42:39.601296 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:42:39.601305 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:42:39.601318 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:42:39.601328 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:42:39.601341 systemd[1]: Successfully loaded SELinux policy in 40.273ms.
Sep 13 00:42:39.601363 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.562ms.
Sep 13 00:42:39.601375 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:42:39.601389 systemd[1]: Detected virtualization kvm.
Sep 13 00:42:39.601399 systemd[1]: Detected architecture x86-64.
Sep 13 00:42:39.601409 systemd[1]: Detected first boot.
Sep 13 00:42:39.601420 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:42:39.601430 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:42:39.601440 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:42:39.601458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:42:39.601469 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:42:39.601480 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:42:39.601498 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:42:39.601508 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:42:39.601519 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:42:39.601536 systemd[1]: Stopped iscsid.service.
Sep 13 00:42:39.601547 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:42:39.601557 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:42:39.601567 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:42:39.601579 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:42:39.601589 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:42:39.601599 systemd[1]: Created slice system-getty.slice.
Sep 13 00:42:39.601617 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:42:39.601628 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:42:39.601638 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:42:39.601648 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:42:39.601658 systemd[1]: Created slice user.slice.
Sep 13 00:42:39.601671 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:42:39.601694 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:42:39.601705 systemd[1]: Set up automount boot.automount.
Sep 13 00:42:39.601727 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:42:39.601737 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:42:39.601748 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:42:39.601758 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:42:39.601768 systemd[1]: Reached target integritysetup.target.
Sep 13 00:42:39.601778 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:42:39.601788 systemd[1]: Reached target remote-fs.target.
Sep 13 00:42:39.601798 systemd[1]: Reached target slices.target.
Sep 13 00:42:39.601808 systemd[1]: Reached target swap.target.
Sep 13 00:42:39.601818 systemd[1]: Reached target torcx.target.
Sep 13 00:42:39.601837 systemd[1]: Reached target veritysetup.target.
Sep 13 00:42:39.601848 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:42:39.601858 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:42:39.601868 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:42:39.601878 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:42:39.601889 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:42:39.601907 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:42:39.601920 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:42:39.601931 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:42:39.601948 systemd[1]: Mounting media.mount...
Sep 13 00:42:39.601958 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:39.601969 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:42:39.601980 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:42:39.601991 systemd[1]: Mounting tmp.mount...
Sep 13 00:42:39.602001 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:42:39.602013 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:42:39.602023 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:42:39.602034 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:42:39.602051 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:42:39.602061 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:42:39.602071 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:42:39.602086 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:42:39.602096 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:42:39.602106 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:42:39.602117 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:42:39.602127 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:42:39.602138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:42:39.602156 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:42:39.602166 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:42:39.602176 kernel: loop: module loaded
Sep 13 00:42:39.602189 kernel: fuse: init (API version 7.34)
Sep 13 00:42:39.602199 systemd[1]: Starting systemd-journald.service...
Sep 13 00:42:39.602210 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:42:39.602223 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:42:39.602233 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:42:39.602244 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:42:39.602261 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:42:39.602271 systemd[1]: Stopped verity-setup.service.
Sep 13 00:42:39.602282 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:39.602292 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:42:39.602305 systemd-journald[985]: Journal started
Sep 13 00:42:39.602350 systemd-journald[985]: Runtime Journal (/run/log/journal/367c0f4cc4b043cd8c7101cce0071f84) is 6.0M, max 48.5M, 42.5M free.
Sep 13 00:42:36.002000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:42:36.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:42:36.276000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:42:36.276000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:42:36.276000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:42:36.276000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:42:36.276000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:42:36.375000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:42:36.375000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e4 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:36.375000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:42:36.377000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:42:36.377000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859c9 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:36.377000 audit: CWD cwd="/"
Sep 13 00:42:36.377000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:36.377000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:42:36.377000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:42:39.380000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:42:39.380000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:42:39.380000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:42:39.380000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:42:39.380000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:42:39.380000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:42:39.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.392000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:42:39.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.500000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:42:39.500000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:42:39.500000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:42:39.500000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:42:39.500000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:42:39.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.599000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:42:39.599000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe37ce1e00 a2=4000 a3=7ffe37ce1e9c items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:39.599000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:42:39.378616 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:42:39.605227 systemd[1]: Started systemd-journald.service.
Sep 13 00:42:36.374132 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:42:39.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:39.378629 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:42:36.374434 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:42:39.382627 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:42:36.374451 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:42:39.605383 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:42:36.374496 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:42:39.606217 systemd[1]: Mounted media.mount. Sep 13 00:42:36.374505 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:42:39.606971 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:42:36.374551 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:42:39.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.607858 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:42:36.374562 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:42:39.608782 systemd[1]: Mounted tmp.mount. 
Sep 13 00:42:36.374871 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:42:39.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.609707 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:42:36.374935 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:42:39.610848 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:42:36.374950 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:42:36.375695 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:42:39.611968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:42:39.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:36.375744 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:42:39.612100 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:42:36.375762 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:42:36.375776 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:42:39.613227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:36.375791 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:42:39.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.613349 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:42:36.375804 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:42:39.045534 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:42:39.614447 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:42:39.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.045854 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:42:39.614586 systemd[1]: Finished modprobe@drm.service. 
Sep 13 00:42:39.045991 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:42:39.046174 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:42:39.046224 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:42:39.046294 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-13T00:42:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:42:39.615658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:39.615820 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:39.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:39.617128 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:42:39.617293 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:42:39.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.618308 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:42:39.618435 systemd[1]: Finished modprobe@loop.service. Sep 13 00:42:39.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.619490 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:42:39.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.620581 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:42:39.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.621770 systemd[1]: Finished systemd-remount-fs.service. 
Sep 13 00:42:39.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.623169 systemd[1]: Reached target network-pre.target. Sep 13 00:42:39.625096 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:42:39.627073 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:42:39.627915 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:42:39.629648 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:42:39.631658 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:42:39.632602 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:42:39.633960 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:42:39.635032 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:42:39.636380 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:42:39.639111 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:42:39.643419 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:42:39.644502 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:42:39.646172 systemd-journald[985]: Time spent on flushing to /var/log/journal/367c0f4cc4b043cd8c7101cce0071f84 is 13.422ms for 1094 entries. Sep 13 00:42:39.646172 systemd-journald[985]: System Journal (/var/log/journal/367c0f4cc4b043cd8c7101cce0071f84) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:42:40.116526 systemd-journald[985]: Received client request to flush runtime journal. Sep 13 00:42:39.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:39.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:39.649961 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:42:39.652165 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:42:40.117225 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:42:39.673825 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:42:39.675156 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:42:39.890765 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:42:39.891800 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:42:40.117807 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:42:40.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.492290 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:42:40.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:40.493000 audit: BPF prog-id=18 op=LOAD Sep 13 00:42:40.493000 audit: BPF prog-id=19 op=LOAD Sep 13 00:42:40.493000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:42:40.493000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:42:40.494773 systemd[1]: Starting systemd-udevd.service... Sep 13 00:42:40.510464 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Sep 13 00:42:40.523058 systemd[1]: Started systemd-udevd.service. Sep 13 00:42:40.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.524000 audit: BPF prog-id=20 op=LOAD Sep 13 00:42:40.525670 systemd[1]: Starting systemd-networkd.service... Sep 13 00:42:40.534038 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:42:40.532000 audit: BPF prog-id=21 op=LOAD Sep 13 00:42:40.532000 audit: BPF prog-id=22 op=LOAD Sep 13 00:42:40.532000 audit: BPF prog-id=23 op=LOAD Sep 13 00:42:40.560948 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:42:40.571667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:42:40.572927 systemd[1]: Started systemd-userdbd.service. Sep 13 00:42:40.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:40.596712 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:42:40.602731 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:42:40.612000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:42:40.617811 kernel: kauditd_printk_skb: 104 callbacks suppressed Sep 13 00:42:40.617911 kernel: audit: type=1400 audit(1757724160.612:139): avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:42:40.618086 systemd-networkd[1023]: lo: Link UP Sep 13 00:42:40.618537 systemd-networkd[1023]: lo: Gained carrier Sep 13 00:42:40.619021 systemd-networkd[1023]: Enumeration completed Sep 13 00:42:40.619169 systemd[1]: Started systemd-networkd.service. Sep 13 00:42:40.619443 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:42:40.620584 systemd-networkd[1023]: eth0: Link UP Sep 13 00:42:40.620664 systemd-networkd[1023]: eth0: Gained carrier Sep 13 00:42:40.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.627724 kernel: audit: type=1130 audit(1757724160.622:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:40.650722 kernel: audit: type=1300 audit(1757724160.612:139): arch=c000003e syscall=175 success=yes exit=0 a0=5640d6535a80 a1=338ec a2=7f2fe09a2bc5 a3=5 items=110 ppid=1014 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.650897 kernel: audit: type=1307 audit(1757724160.612:139): cwd="/" Sep 13 00:42:40.650954 kernel: audit: type=1302 audit(1757724160.612:139): item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.650983 kernel: audit: type=1302 audit(1757724160.612:139): item=1 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.651006 kernel: audit: type=1302 audit(1757724160.612:139): item=2 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.651036 kernel: audit: type=1302 audit(1757724160.612:139): item=3 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.651057 kernel: audit: type=1302 audit(1757724160.612:139): item=4 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5640d6535a80 a1=338ec a2=7f2fe09a2bc5 a3=5 items=110 ppid=1014 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:42:40.612000 audit: CWD cwd="/" Sep 13 00:42:40.612000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=1 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=2 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=3 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=4 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=5 name=(null) inode=15417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=6 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=7 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=8 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=9 name=(null) inode=15419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=10 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=11 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.655769 kernel: audit: type=1302 audit(1757724160.612:139): item=5 name=(null) inode=15417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=12 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=13 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=14 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=15 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=16 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=17 name=(null) inode=15423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=18 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=19 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=20 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=21 name=(null) inode=15425 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=22 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=23 name=(null) inode=15426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=24 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=25 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=26 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=27 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=28 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=29 name=(null) inode=15429 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=30 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=31 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=32 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=33 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=34 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=35 name=(null) inode=15432 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=36 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=37 name=(null) inode=15433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=38 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=39 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=40 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=41 name=(null) inode=15435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=42 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=43 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:42:40.612000 audit: PATH item=44 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=45 name=(null) inode=15437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=46 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=47 name=(null) inode=15438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=48 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=49 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=50 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=51 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=52 name=(null) inode=15436 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=53 
name=(null) inode=15441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=55 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=56 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=57 name=(null) inode=15443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=58 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=59 name=(null) inode=15444 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=60 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=61 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=62 name=(null) inode=15445 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=63 name=(null) inode=15446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=64 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=65 name=(null) inode=15447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=66 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.655939 systemd-networkd[1023]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:42:40.612000 audit: PATH item=67 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=68 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=69 name=(null) inode=15449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=70 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=71 name=(null) inode=15450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=72 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=73 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=74 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=75 name=(null) inode=15452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=76 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=77 name=(null) inode=15453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=78 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=79 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 
audit: PATH item=80 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=81 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=82 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=83 name=(null) inode=15456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=84 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=85 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=86 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=87 name=(null) inode=15458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=88 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=89 name=(null) inode=15459 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=90 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=91 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=92 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=93 name=(null) inode=15461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=94 name=(null) inode=15457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=95 name=(null) inode=15462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=96 name=(null) inode=15442 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=97 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=98 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=99 name=(null) inode=15464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=100 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=101 name=(null) inode=15465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=102 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=103 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=104 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=105 name=(null) inode=15467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=106 name=(null) inode=15463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=107 name=(null) inode=15468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PATH item=109 name=(null) inode=15469 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:42:40.612000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:42:40.661762 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:42:40.662107 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:42:40.662302 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:42:40.682721 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:42:40.684707 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:42:40.733711 kernel: kvm: Nested Virtualization enabled Sep 13 00:42:40.733826 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:42:40.735172 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:42:40.735208 kernel: SVM: Virtual GIF supported Sep 13 00:42:40.753722 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:42:40.778276 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:42:40.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.780848 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:42:40.793093 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:42:40.820989 systemd[1]: Finished lvm2-activation-early.service. 
Sep 13 00:42:40.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.822102 systemd[1]: Reached target cryptsetup.target. Sep 13 00:42:40.824127 systemd[1]: Starting lvm2-activation.service... Sep 13 00:42:40.828055 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:42:40.857633 systemd[1]: Finished lvm2-activation.service. Sep 13 00:42:40.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.858745 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:42:40.859618 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:42:40.859647 systemd[1]: Reached target local-fs.target. Sep 13 00:42:40.860476 systemd[1]: Reached target machines.target. Sep 13 00:42:40.862520 systemd[1]: Starting ldconfig.service... Sep 13 00:42:40.863602 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:40.863672 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:40.864799 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:42:40.866496 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:42:40.868958 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:42:40.870819 systemd[1]: Starting systemd-sysext.service... 
Sep 13 00:42:40.871982 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Sep 13 00:42:40.873025 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:42:40.879259 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:42:40.880813 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:42:40.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.888972 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:42:40.889233 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:42:40.907726 kernel: loop0: detected capacity change from 0 to 229808 Sep 13 00:42:40.926204 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Sep 13 00:42:40.926204 systemd-fsck[1060]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:42:40.927798 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:42:40.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:40.930764 systemd[1]: Mounting boot.mount... Sep 13 00:42:41.108996 systemd[1]: Mounted boot.mount. Sep 13 00:42:41.113701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:42:41.251276 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:42:41.252100 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:42:41.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:42:41.258714 kernel: loop1: detected capacity change from 0 to 229808 Sep 13 00:42:41.262493 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:42:41.265024 (sd-sysext)[1065]: Using extensions 'kubernetes'. Sep 13 00:42:41.265359 (sd-sysext)[1065]: Merged extensions into '/usr'. Sep 13 00:42:41.267309 systemd[1]: Finished ldconfig.service. Sep 13 00:42:41.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.280323 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:41.282510 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:42:41.283477 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.284529 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:41.286385 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:41.288204 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:41.289048 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.289148 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:41.289245 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:42:41.291661 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:42:41.292741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:41.292850 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:42:41.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.294279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:41.294396 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:41.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.295647 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:42:41.295778 systemd[1]: Finished modprobe@loop.service. Sep 13 00:42:41.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.297013 systemd[1]: Finished systemd-boot-update.service. 
Sep 13 00:42:41.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.298246 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:42:41.298337 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.299184 systemd[1]: Finished systemd-sysext.service. Sep 13 00:42:41.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.301108 systemd[1]: Starting ensure-sysext.service... Sep 13 00:42:41.302784 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:42:41.307100 systemd[1]: Reloading. Sep 13 00:42:41.313508 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:42:41.315398 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:42:41.318043 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 13 00:42:41.360272 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-09-13T00:42:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:42:41.360305 /usr/lib/systemd/system-generators/torcx-generator[1091]: time="2025-09-13T00:42:41Z" level=info msg="torcx already run" Sep 13 00:42:41.426606 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:42:41.426627 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:42:41.443543 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 00:42:41.492000 audit: BPF prog-id=24 op=LOAD Sep 13 00:42:41.493000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:42:41.493000 audit: BPF prog-id=25 op=LOAD Sep 13 00:42:41.493000 audit: BPF prog-id=26 op=LOAD Sep 13 00:42:41.493000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:42:41.493000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:42:41.494000 audit: BPF prog-id=27 op=LOAD Sep 13 00:42:41.494000 audit: BPF prog-id=28 op=LOAD Sep 13 00:42:41.494000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:42:41.494000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:42:41.495000 audit: BPF prog-id=29 op=LOAD Sep 13 00:42:41.495000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:42:41.495000 audit: BPF prog-id=30 op=LOAD Sep 13 00:42:41.495000 audit: BPF prog-id=31 op=LOAD Sep 13 00:42:41.495000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:42:41.495000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:42:41.497000 audit: BPF prog-id=32 op=LOAD Sep 13 00:42:41.497000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:42:41.500139 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:42:41.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.504557 systemd[1]: Starting audit-rules.service... Sep 13 00:42:41.506436 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:42:41.508290 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:42:41.509000 audit: BPF prog-id=33 op=LOAD Sep 13 00:42:41.510621 systemd[1]: Starting systemd-resolved.service... Sep 13 00:42:41.511000 audit: BPF prog-id=34 op=LOAD Sep 13 00:42:41.512870 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:42:41.514722 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:42:41.516038 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:42:41.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.519000 audit[1140]: SYSTEM_BOOT pid=1140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.522954 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.524303 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:41.526281 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:41.528063 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:41.528888 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.529052 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:41.529210 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:42:41.530732 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:42:41.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.532306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:41.532416 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:42:41.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.533846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:41.533993 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:42:41.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.535359 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:42:41.535455 systemd[1]: Finished modprobe@loop.service. Sep 13 00:42:41.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.538010 systemd[1]: Finished systemd-update-utmp.service. 
Sep 13 00:42:41.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.540018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.541107 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:42:41.542784 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:42:41.544592 systemd[1]: Starting modprobe@loop.service... Sep 13 00:42:41.545409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:42:41.545510 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:42:41.546606 systemd[1]: Starting systemd-update-done.service... Sep 13 00:42:41.547525 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:42:41.548406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:42:41.548518 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:42:41.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:42:41.549812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:42:41.549933 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 13 00:42:41.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:41.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:41.551282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:42:41.551396 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:42:41.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:41.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:42:41.552587 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:42:41.552712 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:42:41.555189 augenrules[1162]: No rules
Sep 13 00:42:41.555396 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:42:41.554000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:42:41.554000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9b6ee850 a2=420 a3=0 items=0 ppid=1134 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:42:41.554000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:42:41.558353 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:42:41.560320 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:42:41.562194 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:42:41.564104 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:42:41.564965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:42:41.565059 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:42:41.566208 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:42:41.567352 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:42:41.568592 systemd[1]: Finished audit-rules.service.
Sep 13 00:42:41.569764 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:42:41.571071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:42:41.571171 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:42:41.572352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:42:41.572471 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:42:41.573634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:42:41.573762 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:42:41.575195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:42:41.575309 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:42:41.576720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:42:41.576830 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:42:41.577822 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:42:41.589963 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:42:41.590973 systemd[1]: Reached target time-set.target.
Sep 13 00:42:42.202563 systemd-timesyncd[1139]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:42:42.202622 systemd-timesyncd[1139]: Initial clock synchronization to Sat 2025-09-13 00:42:42.202482 UTC.
Sep 13 00:42:42.219941 systemd-resolved[1137]: Positive Trust Anchors:
Sep 13 00:42:42.219950 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:42:42.219976 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:42:42.229609 systemd-resolved[1137]: Defaulting to hostname 'linux'.
Sep 13 00:42:42.231251 systemd[1]: Started systemd-resolved.service.
Sep 13 00:42:42.232225 systemd[1]: Reached target network.target.
Sep 13 00:42:42.232989 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:42:42.233792 systemd[1]: Reached target sysinit.target.
Sep 13 00:42:42.234619 systemd[1]: Started motdgen.path.
Sep 13 00:42:42.235319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:42:42.236524 systemd[1]: Started logrotate.timer.
Sep 13 00:42:42.237290 systemd[1]: Started mdadm.timer.
Sep 13 00:42:42.237950 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:42:42.238896 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:42:42.238924 systemd[1]: Reached target paths.target.
Sep 13 00:42:42.239656 systemd[1]: Reached target timers.target.
Sep 13 00:42:42.240714 systemd[1]: Listening on dbus.socket.
Sep 13 00:42:42.242359 systemd[1]: Starting docker.socket...
Sep 13 00:42:42.245028 systemd[1]: Listening on sshd.socket.
Sep 13 00:42:42.245822 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:42:42.246148 systemd[1]: Listening on docker.socket.
Sep 13 00:42:42.246912 systemd[1]: Reached target sockets.target.
Sep 13 00:42:42.247678 systemd[1]: Reached target basic.target.
Sep 13 00:42:42.248408 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:42:42.248428 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:42:42.249178 systemd[1]: Starting containerd.service...
Sep 13 00:42:42.250707 systemd[1]: Starting dbus.service...
Sep 13 00:42:42.253531 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:42:42.255604 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:42:42.256615 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:42:42.257134 jq[1178]: false
Sep 13 00:42:42.257893 systemd[1]: Starting motdgen.service...
Sep 13 00:42:42.259896 systemd[1]: Starting prepare-helm.service...
Sep 13 00:42:42.262085 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:42:42.263852 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:42:42.267762 systemd[1]: Starting systemd-logind.service...
Sep 13 00:42:42.268679 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:42:42.268823 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:42:42.269447 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:42:42.270370 systemd[1]: Starting update-engine.service...
Sep 13 00:42:42.272187 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:42:42.275145 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found loop1
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found sr0
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda1
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda2
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda3
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found usr
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda4
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda6
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda7
Sep 13 00:42:42.279757 extend-filesystems[1179]: Found vda9
Sep 13 00:42:42.279757 extend-filesystems[1179]: Checking size of /dev/vda9
Sep 13 00:42:42.297168 jq[1193]: true
Sep 13 00:42:42.275360 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:42:42.284100 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:42:42.299634 tar[1197]: linux-amd64/LICENSE
Sep 13 00:42:42.299634 tar[1197]: linux-amd64/helm
Sep 13 00:42:42.284283 systemd[1]: Finished motdgen.service.
Sep 13 00:42:42.299995 jq[1199]: true
Sep 13 00:42:42.285137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:42:42.285285 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:42:42.301858 dbus-daemon[1177]: [system] SELinux support is enabled
Sep 13 00:42:42.302010 systemd[1]: Started dbus.service.
Sep 13 00:42:42.305236 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:42:42.305262 systemd[1]: Reached target system-config.target.
Sep 13 00:42:42.306192 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:42:42.306219 systemd[1]: Reached target user-config.target.
Sep 13 00:42:42.372513 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:42:42.399145 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:42:42.399217 extend-filesystems[1179]: Resized partition /dev/vda9
Sep 13 00:42:42.400231 update_engine[1192]: I0913 00:42:42.383337  1192 main.cc:92] Flatcar Update Engine starting
Sep 13 00:42:42.400231 update_engine[1192]: I0913 00:42:42.391695  1192 update_check_scheduler.cc:74] Next update check in 11m14s
Sep 13 00:42:42.385598 systemd[1]: Started update-engine.service.
Sep 13 00:42:42.400617 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:42:42.408991 bash[1228]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:42:42.554348 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:42:42.554348 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:42:42.554348 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:42:42.558328 extend-filesystems[1179]: Resized filesystem in /dev/vda9
Sep 13 00:42:42.559272 env[1201]: time="2025-09-13T00:42:42.554661519Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:42:42.561238 systemd[1]: Started locksmithd.service.
Sep 13 00:42:42.564253 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:42:42.564491 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:42:42.565763 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:42:42.574256 env[1201]: time="2025-09-13T00:42:42.574207818Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:42:42.574404 env[1201]: time="2025-09-13T00:42:42.574384249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.574331 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:42.574351 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:42:42.576135 env[1201]: time="2025-09-13T00:42:42.576101319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576135 env[1201]: time="2025-09-13T00:42:42.576128189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576336 env[1201]: time="2025-09-13T00:42:42.576310290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576336 env[1201]: time="2025-09-13T00:42:42.576329897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576421 env[1201]: time="2025-09-13T00:42:42.576341158Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:42:42.576421 env[1201]: time="2025-09-13T00:42:42.576349484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576421 env[1201]: time="2025-09-13T00:42:42.576409717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576719 env[1201]: time="2025-09-13T00:42:42.576696865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576828 env[1201]: time="2025-09-13T00:42:42.576804707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:42:42.576828 env[1201]: time="2025-09-13T00:42:42.576821759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:42:42.576903 env[1201]: time="2025-09-13T00:42:42.576867575Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:42:42.576903 env[1201]: time="2025-09-13T00:42:42.576878235Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:42:42.583718 env[1201]: time="2025-09-13T00:42:42.583675931Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:42:42.583781 env[1201]: time="2025-09-13T00:42:42.583723380Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:42:42.583781 env[1201]: time="2025-09-13T00:42:42.583736094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:42:42.583781 env[1201]: time="2025-09-13T00:42:42.583769176Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583787340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583802388Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583813649Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583825862Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583836773Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.583854 env[1201]: time="2025-09-13T00:42:42.583847813Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.584016 env[1201]: time="2025-09-13T00:42:42.583858193Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.584016 env[1201]: time="2025-09-13T00:42:42.583868282Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:42:42.584016 env[1201]: time="2025-09-13T00:42:42.583955255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:42:42.584094 env[1201]: time="2025-09-13T00:42:42.584024424Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:42:42.584262 env[1201]: time="2025-09-13T00:42:42.584239548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:42:42.584310 env[1201]: time="2025-09-13T00:42:42.584268252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584310 env[1201]: time="2025-09-13T00:42:42.584279733Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:42:42.584361 env[1201]: time="2025-09-13T00:42:42.584322123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584361 env[1201]: time="2025-09-13T00:42:42.584332933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584361 env[1201]: time="2025-09-13T00:42:42.584343883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584361 env[1201]: time="2025-09-13T00:42:42.584353391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584476 env[1201]: time="2025-09-13T00:42:42.584363440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584476 env[1201]: time="2025-09-13T00:42:42.584373619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584476 env[1201]: time="2025-09-13T00:42:42.584382967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584476 env[1201]: time="2025-09-13T00:42:42.584393366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584476 env[1201]: time="2025-09-13T00:42:42.584404036Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584522929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584536715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584546714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584556452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584569687Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584578493Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584595385Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:42:42.584665 env[1201]: time="2025-09-13T00:42:42.584637694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:42:42.584896 env[1201]: time="2025-09-13T00:42:42.584824114Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:42:42.584896 env[1201]: time="2025-09-13T00:42:42.584898193Z" level=info msg="Connect containerd service"
Sep 13 00:42:42.585661 env[1201]: time="2025-09-13T00:42:42.584928710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:42:42.585661 env[1201]: time="2025-09-13T00:42:42.585535327Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:42:42.585896 env[1201]: time="2025-09-13T00:42:42.585858283Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:42:42.585967 env[1201]: time="2025-09-13T00:42:42.585914939Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:42:42.586018 systemd[1]: Started containerd.service.
Sep 13 00:42:42.589960 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:42:42.589995 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:42:42.590395 systemd-logind[1189]: New seat seat0.
Sep 13 00:42:42.591480 env[1201]: time="2025-09-13T00:42:42.591420562Z" level=info msg="Start subscribing containerd event"
Sep 13 00:42:42.591544 env[1201]: time="2025-09-13T00:42:42.591497587Z" level=info msg="Start recovering state"
Sep 13 00:42:42.591581 env[1201]: time="2025-09-13T00:42:42.591569772Z" level=info msg="Start event monitor"
Sep 13 00:42:42.591611 env[1201]: time="2025-09-13T00:42:42.591592815Z" level=info msg="Start snapshots syncer"
Sep 13 00:42:42.591611 env[1201]: time="2025-09-13T00:42:42.591600970Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:42:42.591611 env[1201]: time="2025-09-13T00:42:42.591606882Z" level=info msg="Start streaming server"
Sep 13 00:42:42.593484 systemd[1]: Started systemd-logind.service.
Sep 13 00:42:42.596610 env[1201]: time="2025-09-13T00:42:42.596561812Z" level=info msg="containerd successfully booted in 0.174536s"
Sep 13 00:42:42.622486 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:42:42.990904 tar[1197]: linux-amd64/README.md
Sep 13 00:42:42.999344 systemd[1]: Finished prepare-helm.service.
Sep 13 00:42:43.000805 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:42:43.022871 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:42:43.025344 systemd[1]: Starting issuegen.service...
Sep 13 00:42:43.030670 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:42:43.030796 systemd[1]: Finished issuegen.service.
Sep 13 00:42:43.032847 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:42:43.038261 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:42:43.040750 systemd[1]: Started getty@tty1.service.
Sep 13 00:42:43.043793 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:42:43.045003 systemd[1]: Reached target getty.target.
Sep 13 00:42:43.155650 systemd-networkd[1023]: eth0: Gained IPv6LL
Sep 13 00:42:43.157519 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:42:43.158730 systemd[1]: Reached target network-online.target.
Sep 13 00:42:43.161147 systemd[1]: Starting kubelet.service...
Sep 13 00:42:44.266990 systemd[1]: Started kubelet.service.
Sep 13 00:42:44.268733 systemd[1]: Reached target multi-user.target.
Sep 13 00:42:44.271355 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:42:44.281997 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:42:44.282160 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:42:44.283488 systemd[1]: Startup finished in 929ms (kernel) + 6.120s (initrd) + 7.711s (userspace) = 14.761s.
Sep 13 00:42:44.411583 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:42:44.412697 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:49440.service.
Sep 13 00:42:44.448936 sshd[1267]: Accepted publickey for core from 10.0.0.1 port 49440 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:42:44.452208 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.462516 systemd-logind[1189]: New session 1 of user core.
Sep 13 00:42:44.463107 systemd[1]: Created slice user-500.slice.
Sep 13 00:42:44.464587 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:42:44.475166 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:42:44.477330 systemd[1]: Starting user@500.service...
Sep 13 00:42:44.501740 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.584645 systemd[1270]: Queued start job for default target default.target.
Sep 13 00:42:44.585161 systemd[1270]: Reached target paths.target.
Sep 13 00:42:44.585190 systemd[1270]: Reached target sockets.target.
Sep 13 00:42:44.585206 systemd[1270]: Reached target timers.target.
Sep 13 00:42:44.585220 systemd[1270]: Reached target basic.target.
Sep 13 00:42:44.585345 systemd[1]: Started user@500.service.
Sep 13 00:42:44.586610 systemd[1270]: Reached target default.target.
Sep 13 00:42:44.586672 systemd[1]: Started session-1.scope.
Sep 13 00:42:44.586678 systemd[1270]: Startup finished in 78ms.
Sep 13 00:42:44.642286 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:49444.service.
Sep 13 00:42:44.683601 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 49444 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:42:44.684312 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.689205 systemd-logind[1189]: New session 2 of user core.
Sep 13 00:42:44.689738 systemd[1]: Started session-2.scope.
Sep 13 00:42:44.746856 sshd[1280]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:44.750134 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:49444.service: Deactivated successfully.
Sep 13 00:42:44.750876 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:42:44.751584 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:42:44.752795 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:49446.service.
Sep 13 00:42:44.754200 systemd-logind[1189]: Removed session 2.
Sep 13 00:42:44.785201 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 49446 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:42:44.786393 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.789914 systemd-logind[1189]: New session 3 of user core.
Sep 13 00:42:44.790675 systemd[1]: Started session-3.scope.
Sep 13 00:42:44.843658 sshd[1286]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:44.846678 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:49446.service: Deactivated successfully.
Sep 13 00:42:44.847345 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:42:44.851401 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:49454.service.
Sep 13 00:42:44.852050 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:42:44.853899 systemd-logind[1189]: Removed session 3.
Sep 13 00:42:44.884196 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 49454 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:42:44.885671 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.889275 systemd-logind[1189]: New session 4 of user core.
Sep 13 00:42:44.890299 systemd[1]: Started session-4.scope.
Sep 13 00:42:44.947413 sshd[1292]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:44.950151 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:49454.service: Deactivated successfully.
Sep 13 00:42:44.950854 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:42:44.951431 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:42:44.952623 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:49456.service.
Sep 13 00:42:44.953821 systemd-logind[1189]: Removed session 4.
Sep 13 00:42:44.960740 kubelet[1259]: E0913 00:42:44.960697    1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:44.962389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:44.962567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:44.962892 systemd[1]: kubelet.service: Consumed 1.717s CPU time.
Sep 13 00:42:44.985115 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 49456 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:42:44.986414 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:42:44.990431 systemd-logind[1189]: New session 5 of user core.
Sep 13 00:42:44.991608 systemd[1]: Started session-5.scope.
Sep 13 00:42:45.048767 sudo[1302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:42:45.048954 sudo[1302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 13 00:42:45.083394 systemd[1]: Starting docker.service...
Sep 13 00:42:45.138037 env[1314]: time="2025-09-13T00:42:45.137881736Z" level=info msg="Starting up"
Sep 13 00:42:45.140191 env[1314]: time="2025-09-13T00:42:45.140164316Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:42:45.140191 env[1314]: time="2025-09-13T00:42:45.140184043Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:42:45.140273 env[1314]: time="2025-09-13T00:42:45.140205393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:42:45.140273 env[1314]: time="2025-09-13T00:42:45.140221103Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:42:45.169398 env[1314]: time="2025-09-13T00:42:45.169357463Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 13 00:42:45.169398 env[1314]: time="2025-09-13T00:42:45.169382741Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 13 00:42:45.169573 env[1314]: time="2025-09-13T00:42:45.169401506Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 13 00:42:45.169573 env[1314]: time="2025-09-13T00:42:45.169421012Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 13 00:42:46.414065 env[1314]: time="2025-09-13T00:42:46.414003204Z" level=info msg="Loading containers: start."
Sep 13 00:42:46.821509 kernel: Initializing XFRM netlink socket
Sep 13 00:42:46.848700 env[1314]: time="2025-09-13T00:42:46.848642292Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 13 00:42:46.906025 systemd-networkd[1023]: docker0: Link UP
Sep 13 00:42:46.925640 env[1314]: time="2025-09-13T00:42:46.925590951Z" level=info msg="Loading containers: done."
Sep 13 00:42:46.940319 env[1314]: time="2025-09-13T00:42:46.940253232Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:42:46.940683 env[1314]: time="2025-09-13T00:42:46.940651369Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 13 00:42:46.940863 env[1314]: time="2025-09-13T00:42:46.940839672Z" level=info msg="Daemon has completed initialization"
Sep 13 00:42:46.963000 systemd[1]: Started docker.service.
Sep 13 00:42:46.967956 env[1314]: time="2025-09-13T00:42:46.967876576Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:42:47.952762 env[1201]: time="2025-09-13T00:42:47.952685613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 13 00:42:48.575445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081887250.mount: Deactivated successfully.
Sep 13 00:42:50.629839 env[1201]: time="2025-09-13T00:42:50.629746962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:50.633638 env[1201]: time="2025-09-13T00:42:50.633572084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:50.636208 env[1201]: time="2025-09-13T00:42:50.636173231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:50.648097 env[1201]: time="2025-09-13T00:42:50.648063245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:50.648730 env[1201]: time="2025-09-13T00:42:50.648691974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 13 00:42:50.649597 env[1201]: time="2025-09-13T00:42:50.649506271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 13 00:42:53.645711 env[1201]: time="2025-09-13T00:42:53.645633438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:53.647706 env[1201]: time="2025-09-13T00:42:53.647639560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:53.649376 env[1201]: time="2025-09-13T00:42:53.649322505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:53.651052 env[1201]: time="2025-09-13T00:42:53.650999800Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:53.651750 env[1201]: time="2025-09-13T00:42:53.651711304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 13 00:42:53.652318 env[1201]: time="2025-09-13T00:42:53.652293306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 13 00:42:55.213829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:42:55.214089 systemd[1]: Stopped kubelet.service.
Sep 13 00:42:55.214142 systemd[1]: kubelet.service: Consumed 1.717s CPU time.
Sep 13 00:42:55.215863 systemd[1]: Starting kubelet.service...
Sep 13 00:42:55.368733 systemd[1]: Started kubelet.service.
Sep 13 00:42:56.334243 kubelet[1448]: E0913 00:42:56.334155 1448 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:42:56.337148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:42:56.337344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:42:56.337783 systemd[1]: kubelet.service: Consumed 1.139s CPU time.
Sep 13 00:42:56.601323 env[1201]: time="2025-09-13T00:42:56.601102141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:56.621719 env[1201]: time="2025-09-13T00:42:56.621583354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:56.626555 env[1201]: time="2025-09-13T00:42:56.626484533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:56.629540 env[1201]: time="2025-09-13T00:42:56.629495509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:56.630235 env[1201]: time="2025-09-13T00:42:56.630175955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 13 00:42:56.630968 env[1201]: time="2025-09-13T00:42:56.630936561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 13 00:42:58.214210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301130696.mount: Deactivated successfully.
Sep 13 00:42:59.256728 env[1201]: time="2025-09-13T00:42:59.256656766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.258506 env[1201]: time="2025-09-13T00:42:59.258432014Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.259986 env[1201]: time="2025-09-13T00:42:59.259955310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.261361 env[1201]: time="2025-09-13T00:42:59.261328816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:42:59.261789 env[1201]: time="2025-09-13T00:42:59.261756598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 13 00:42:59.262527 env[1201]: time="2025-09-13T00:42:59.262503288Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 13 00:43:00.329973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457511994.mount: Deactivated successfully.
Sep 13 00:43:01.877663 env[1201]: time="2025-09-13T00:43:01.877574826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:01.879804 env[1201]: time="2025-09-13T00:43:01.879756417Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:01.881881 env[1201]: time="2025-09-13T00:43:01.881824464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:01.883822 env[1201]: time="2025-09-13T00:43:01.883761987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:01.884685 env[1201]: time="2025-09-13T00:43:01.884657456Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 13 00:43:01.885327 env[1201]: time="2025-09-13T00:43:01.885269093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:43:02.637837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041680958.mount: Deactivated successfully.
Sep 13 00:43:02.647553 env[1201]: time="2025-09-13T00:43:02.647502651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:02.649494 env[1201]: time="2025-09-13T00:43:02.649469549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:02.651419 env[1201]: time="2025-09-13T00:43:02.651358300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:02.653072 env[1201]: time="2025-09-13T00:43:02.653036647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:02.653427 env[1201]: time="2025-09-13T00:43:02.653382496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:43:02.654033 env[1201]: time="2025-09-13T00:43:02.653985256Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 00:43:03.375835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154300087.mount: Deactivated successfully.
Sep 13 00:43:06.076293 env[1201]: time="2025-09-13T00:43:06.076226187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:06.078619 env[1201]: time="2025-09-13T00:43:06.078567126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:06.080483 env[1201]: time="2025-09-13T00:43:06.080442512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:06.085382 env[1201]: time="2025-09-13T00:43:06.085338362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:43:06.086233 env[1201]: time="2025-09-13T00:43:06.086192614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 13 00:43:06.503606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:43:06.503854 systemd[1]: Stopped kubelet.service.
Sep 13 00:43:06.503899 systemd[1]: kubelet.service: Consumed 1.139s CPU time.
Sep 13 00:43:06.505759 systemd[1]: Starting kubelet.service...
Sep 13 00:43:06.612978 systemd[1]: Started kubelet.service.
Sep 13 00:43:06.650222 kubelet[1482]: E0913 00:43:06.650145 1482 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:43:06.652365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:43:06.652533 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:43:08.747620 systemd[1]: Stopped kubelet.service.
Sep 13 00:43:08.750239 systemd[1]: Starting kubelet.service...
Sep 13 00:43:08.770388 systemd[1]: Reloading.
Sep 13 00:43:08.840706 /usr/lib/systemd/system-generators/torcx-generator[1516]: time="2025-09-13T00:43:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:43:08.841078 /usr/lib/systemd/system-generators/torcx-generator[1516]: time="2025-09-13T00:43:08Z" level=info msg="torcx already run"
Sep 13 00:43:09.222451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:43:09.222495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:43:09.243850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:43:09.344538 systemd[1]: Started kubelet.service.
Sep 13 00:43:09.346173 systemd[1]: Stopping kubelet.service...
Sep 13 00:43:09.346516 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:43:09.346701 systemd[1]: Stopped kubelet.service.
Sep 13 00:43:09.348437 systemd[1]: Starting kubelet.service...
Sep 13 00:43:09.449539 systemd[1]: Started kubelet.service.
Sep 13 00:43:09.509069 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:43:09.509069 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:43:09.509069 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:43:09.509480 kubelet[1563]: I0913 00:43:09.509075 1563 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:43:10.309345 kubelet[1563]: I0913 00:43:10.309291 1563 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:43:10.309345 kubelet[1563]: I0913 00:43:10.309323 1563 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:43:10.309591 kubelet[1563]: I0913 00:43:10.309577 1563 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:43:10.365772 kubelet[1563]: E0913 00:43:10.365714 1563 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 00:43:10.366094 kubelet[1563]: I0913 00:43:10.366071 1563 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:43:10.377121 kubelet[1563]: E0913 00:43:10.377042 1563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:43:10.377121 kubelet[1563]: I0913 00:43:10.377094 1563 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:43:10.381726 kubelet[1563]: I0913 00:43:10.381701 1563 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:43:10.382245 kubelet[1563]: I0913 00:43:10.382208 1563 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:43:10.382390 kubelet[1563]: I0913 00:43:10.382236 1563 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:43:10.382568 kubelet[1563]: I0913 00:43:10.382406 1563 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:43:10.382568 kubelet[1563]: I0913 00:43:10.382415 1563 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:43:10.384216 kubelet[1563]: I0913 00:43:10.384187 1563 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:43:10.387518 kubelet[1563]: I0913 00:43:10.387494 1563 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:43:10.387518 kubelet[1563]: I0913 00:43:10.387519 1563 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:43:10.387610 kubelet[1563]: I0913 00:43:10.387564 1563 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:43:10.390195 kubelet[1563]: I0913 00:43:10.390174 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:43:10.426883 kubelet[1563]: E0913 00:43:10.426826 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 00:43:10.427798 kubelet[1563]: I0913 00:43:10.427777 1563 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:43:10.428271 kubelet[1563]: I0913 00:43:10.428242 1563 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:43:10.428923 kubelet[1563]: W0913 00:43:10.428897 1563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:43:10.430329 kubelet[1563]: E0913 00:43:10.429689 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 00:43:10.431260 kubelet[1563]: I0913 00:43:10.431234 1563 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:43:10.431324 kubelet[1563]: I0913 00:43:10.431289 1563 server.go:1289] "Started kubelet"
Sep 13 00:43:10.432147 kubelet[1563]: I0913 00:43:10.431672 1563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:43:10.432147 kubelet[1563]: I0913 00:43:10.431881 1563 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:43:10.432147 kubelet[1563]: I0913 00:43:10.431987 1563 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:43:10.434202 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 00:43:10.434428 kubelet[1563]: I0913 00:43:10.434277 1563 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:43:10.434428 kubelet[1563]: I0913 00:43:10.434348 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:43:10.434921 kubelet[1563]: I0913 00:43:10.434893 1563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:43:10.435636 kubelet[1563]: E0913 00:43:10.435445 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.435636 kubelet[1563]: I0913 00:43:10.435488 1563 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:43:10.435712 kubelet[1563]: I0913 00:43:10.435641 1563 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:43:10.435712 kubelet[1563]: I0913 00:43:10.435683 1563 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:43:10.436152 kubelet[1563]: E0913 00:43:10.436123 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 00:43:10.436319 kubelet[1563]: I0913 00:43:10.436289 1563 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:43:10.436491 kubelet[1563]: I0913 00:43:10.436373 1563 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:43:10.444970 kubelet[1563]: E0913 00:43:10.444934 1563 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:43:10.445186 kubelet[1563]: I0913 00:43:10.445107 1563 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:43:10.452014 kubelet[1563]: E0913 00:43:10.451933 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms"
Sep 13 00:43:10.454050 kubelet[1563]: E0913 00:43:10.449086 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b0d69fc4211a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:43:10.431256858 +0000 UTC m=+0.974223780,LastTimestamp:2025-09-13 00:43:10.431256858 +0000 UTC m=+0.974223780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:43:10.466798 kubelet[1563]: I0913 00:43:10.466757 1563 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:43:10.466798 kubelet[1563]: I0913 00:43:10.466798 1563 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:43:10.466935 kubelet[1563]: I0913 00:43:10.466824 1563 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:43:10.535845 kubelet[1563]: E0913 00:43:10.535805 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.636437 kubelet[1563]: E0913 00:43:10.636232 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.652949 kubelet[1563]: E0913 00:43:10.652912 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms"
Sep 13 00:43:10.737154 kubelet[1563]: E0913 00:43:10.737124 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.837654 kubelet[1563]: E0913 00:43:10.837578 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.938625 kubelet[1563]: E0913 00:43:10.938474 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:10.980778 kubelet[1563]: I0913 00:43:10.980725 1563 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:43:10.981734 kubelet[1563]: I0913 00:43:10.981702 1563 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:43:10.981817 kubelet[1563]: I0913 00:43:10.981748 1563 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:43:10.981817 kubelet[1563]: I0913 00:43:10.981787 1563 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:43:10.981817 kubelet[1563]: I0913 00:43:10.981804 1563 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:43:10.981920 kubelet[1563]: E0913 00:43:10.981857 1563 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:43:10.982488 kubelet[1563]: E0913 00:43:10.982439 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 00:43:11.039420 kubelet[1563]: E0913 00:43:11.039383 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:43:11.053919 kubelet[1563]: E0913 00:43:11.053869 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms"
Sep 13 00:43:11.081947 kubelet[1563]: E0913 00:43:11.081906 1563 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 13 00:43:11.084897 kubelet[1563]: I0913 00:43:11.084861 1563 policy_none.go:49] "None policy: Start"
Sep 13 00:43:11.084897 kubelet[1563]: I0913 00:43:11.084888 1563 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:43:11.085004 kubelet[1563]: I0913 00:43:11.084904 1563 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:43:11.138279 systemd[1]: Created slice kubepods.slice.
Sep 13 00:43:11.139582 kubelet[1563]: E0913 00:43:11.139563 1563 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:43:11.142324 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:43:11.144745 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:43:11.155171 kubelet[1563]: E0913 00:43:11.155140 1563 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:43:11.155370 kubelet[1563]: I0913 00:43:11.155324 1563 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:43:11.155370 kubelet[1563]: I0913 00:43:11.155346 1563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:43:11.155974 kubelet[1563]: I0913 00:43:11.155952 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:43:11.156438 kubelet[1563]: E0913 00:43:11.156414 1563 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:43:11.156525 kubelet[1563]: E0913 00:43:11.156483 1563 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:43:11.257782 kubelet[1563]: I0913 00:43:11.257653 1563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:43:11.258110 kubelet[1563]: E0913 00:43:11.258083 1563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 13 00:43:11.340643 kubelet[1563]: I0913 00:43:11.340607 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:11.340643 kubelet[1563]: I0913 00:43:11.340644 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:11.340761 kubelet[1563]: I0913 00:43:11.340664 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:11.425495 kubelet[1563]: E0913 00:43:11.425434 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:43:11.459650 kubelet[1563]: I0913 00:43:11.459613 1563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:43:11.459977 kubelet[1563]: E0913 00:43:11.459909 1563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 13 00:43:11.598005 kubelet[1563]: E0913 00:43:11.597954 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:43:11.855616 kubelet[1563]: E0913 00:43:11.855432 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="1.6s" Sep 13 00:43:11.861414 kubelet[1563]: I0913 00:43:11.861385 1563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:43:11.861673 kubelet[1563]: E0913 00:43:11.861642 1563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 13 00:43:11.869205 systemd[1]: Created slice kubepods-burstable-pod23fb9015749e6fd52d468f784a73f207.slice. 
Sep 13 00:43:11.877500 kubelet[1563]: E0913 00:43:11.877437 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:11.877968 kubelet[1563]: E0913 00:43:11.877938 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:11.878687 env[1201]: time="2025-09-13T00:43:11.878630323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb9015749e6fd52d468f784a73f207,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:11.881172 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 00:43:11.882476 kubelet[1563]: E0913 00:43:11.882432 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:11.901849 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 13 00:43:11.903324 kubelet[1563]: E0913 00:43:11.903291 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:11.917188 kubelet[1563]: E0913 00:43:11.917118 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:43:11.944169 kubelet[1563]: I0913 00:43:11.944093 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:11.944169 kubelet[1563]: I0913 00:43:11.944136 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:11.944169 kubelet[1563]: I0913 00:43:11.944158 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:11.944169 kubelet[1563]: I0913 00:43:11.944177 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:11.944424 kubelet[1563]: I0913 00:43:11.944265 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:11.944424 kubelet[1563]: I0913 00:43:11.944302 1563 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:12.183342 kubelet[1563]: E0913 00:43:12.183209 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.183910 env[1201]: time="2025-09-13T00:43:12.183851062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:12.204616 kubelet[1563]: E0913 00:43:12.204524 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.205305 env[1201]: time="2025-09-13T00:43:12.205267648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 
00:43:12.238116 kubelet[1563]: E0913 00:43:12.238048 1563 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:43:12.403074 kubelet[1563]: E0913 00:43:12.403009 1563 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:43:12.499760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191353398.mount: Deactivated successfully. Sep 13 00:43:12.505594 env[1201]: time="2025-09-13T00:43:12.505544837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.509192 env[1201]: time="2025-09-13T00:43:12.509156760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.510181 env[1201]: time="2025-09-13T00:43:12.510149812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.511850 env[1201]: time="2025-09-13T00:43:12.511813982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.513033 env[1201]: time="2025-09-13T00:43:12.512986460Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.514324 env[1201]: time="2025-09-13T00:43:12.514291397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.515725 env[1201]: time="2025-09-13T00:43:12.515696211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.517026 env[1201]: time="2025-09-13T00:43:12.516992882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.518324 env[1201]: time="2025-09-13T00:43:12.518296857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.522546 env[1201]: time="2025-09-13T00:43:12.522449554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.523370 env[1201]: time="2025-09-13T00:43:12.523320677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.533943 env[1201]: time="2025-09-13T00:43:12.533876279Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:12.566971 env[1201]: time="2025-09-13T00:43:12.566850616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:12.567204 env[1201]: time="2025-09-13T00:43:12.566936617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:12.567204 env[1201]: time="2025-09-13T00:43:12.567182889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:12.567423 env[1201]: time="2025-09-13T00:43:12.567370090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/537b6ad3401cf1b99ff27e5f3f7505b8de29bb3e2d6f553248b6398bf9e82b79 pid=1609 runtime=io.containerd.runc.v2 Sep 13 00:43:12.584188 systemd[1]: Started cri-containerd-537b6ad3401cf1b99ff27e5f3f7505b8de29bb3e2d6f553248b6398bf9e82b79.scope. Sep 13 00:43:12.592686 env[1201]: time="2025-09-13T00:43:12.592427512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:12.592686 env[1201]: time="2025-09-13T00:43:12.592484389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:12.592686 env[1201]: time="2025-09-13T00:43:12.592493787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:12.592686 env[1201]: time="2025-09-13T00:43:12.592631375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a248c21ccf16a4210514864378ed1bf760846631765253241db4db158d81bdd8 pid=1636 runtime=io.containerd.runc.v2 Sep 13 00:43:12.597379 env[1201]: time="2025-09-13T00:43:12.597297193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:12.597379 env[1201]: time="2025-09-13T00:43:12.597327279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:12.597379 env[1201]: time="2025-09-13T00:43:12.597336847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:12.597679 env[1201]: time="2025-09-13T00:43:12.597632071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d876205603e025171402b2ddbaa42f45c271c8bb906a399de0d0675edabc04b1 pid=1647 runtime=io.containerd.runc.v2 Sep 13 00:43:12.629481 systemd[1]: Started cri-containerd-a248c21ccf16a4210514864378ed1bf760846631765253241db4db158d81bdd8.scope. Sep 13 00:43:12.637633 systemd[1]: Started cri-containerd-d876205603e025171402b2ddbaa42f45c271c8bb906a399de0d0675edabc04b1.scope. 
Sep 13 00:43:12.664860 env[1201]: time="2025-09-13T00:43:12.663340735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb9015749e6fd52d468f784a73f207,Namespace:kube-system,Attempt:0,} returns sandbox id \"537b6ad3401cf1b99ff27e5f3f7505b8de29bb3e2d6f553248b6398bf9e82b79\"" Sep 13 00:43:12.664994 kubelet[1563]: I0913 00:43:12.663879 1563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:43:12.664994 kubelet[1563]: E0913 00:43:12.664331 1563 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 13 00:43:12.664994 kubelet[1563]: E0913 00:43:12.664810 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.675593 env[1201]: time="2025-09-13T00:43:12.669437225Z" level=info msg="CreateContainer within sandbox \"537b6ad3401cf1b99ff27e5f3f7505b8de29bb3e2d6f553248b6398bf9e82b79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:43:12.689356 env[1201]: time="2025-09-13T00:43:12.688820790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a248c21ccf16a4210514864378ed1bf760846631765253241db4db158d81bdd8\"" Sep 13 00:43:12.689429 kubelet[1563]: E0913 00:43:12.689328 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.709773 env[1201]: time="2025-09-13T00:43:12.709731928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"d876205603e025171402b2ddbaa42f45c271c8bb906a399de0d0675edabc04b1\"" Sep 13 00:43:12.710513 kubelet[1563]: E0913 00:43:12.710488 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.713654 env[1201]: time="2025-09-13T00:43:12.713630849Z" level=info msg="CreateContainer within sandbox \"a248c21ccf16a4210514864378ed1bf760846631765253241db4db158d81bdd8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:43:12.716379 env[1201]: time="2025-09-13T00:43:12.716355648Z" level=info msg="CreateContainer within sandbox \"d876205603e025171402b2ddbaa42f45c271c8bb906a399de0d0675edabc04b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:43:12.730570 env[1201]: time="2025-09-13T00:43:12.730524444Z" level=info msg="CreateContainer within sandbox \"537b6ad3401cf1b99ff27e5f3f7505b8de29bb3e2d6f553248b6398bf9e82b79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6888cab4f40ff03f1ae6a462ec54af23f082021d88e20b62c2fe83416ee30b9f\"" Sep 13 00:43:12.731001 env[1201]: time="2025-09-13T00:43:12.730977854Z" level=info msg="StartContainer for \"6888cab4f40ff03f1ae6a462ec54af23f082021d88e20b62c2fe83416ee30b9f\"" Sep 13 00:43:12.736136 env[1201]: time="2025-09-13T00:43:12.736088426Z" level=info msg="CreateContainer within sandbox \"a248c21ccf16a4210514864378ed1bf760846631765253241db4db158d81bdd8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ef6b3b7bb707b94e625314c88b224330365de2cb1516cc312550001c16d1ec5\"" Sep 13 00:43:12.736720 env[1201]: time="2025-09-13T00:43:12.736697098Z" level=info msg="StartContainer for \"6ef6b3b7bb707b94e625314c88b224330365de2cb1516cc312550001c16d1ec5\"" Sep 13 00:43:12.738972 env[1201]: time="2025-09-13T00:43:12.738930846Z" level=info msg="CreateContainer within sandbox 
\"d876205603e025171402b2ddbaa42f45c271c8bb906a399de0d0675edabc04b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d7de24696d9da1292862af92cde9e9071887507291991e1f17d4267375afc685\"" Sep 13 00:43:12.739313 env[1201]: time="2025-09-13T00:43:12.739289268Z" level=info msg="StartContainer for \"d7de24696d9da1292862af92cde9e9071887507291991e1f17d4267375afc685\"" Sep 13 00:43:12.745401 systemd[1]: Started cri-containerd-6888cab4f40ff03f1ae6a462ec54af23f082021d88e20b62c2fe83416ee30b9f.scope. Sep 13 00:43:12.761539 systemd[1]: Started cri-containerd-6ef6b3b7bb707b94e625314c88b224330365de2cb1516cc312550001c16d1ec5.scope. Sep 13 00:43:12.766038 systemd[1]: Started cri-containerd-d7de24696d9da1292862af92cde9e9071887507291991e1f17d4267375afc685.scope. Sep 13 00:43:12.814677 env[1201]: time="2025-09-13T00:43:12.814628689Z" level=info msg="StartContainer for \"6888cab4f40ff03f1ae6a462ec54af23f082021d88e20b62c2fe83416ee30b9f\" returns successfully" Sep 13 00:43:12.831747 env[1201]: time="2025-09-13T00:43:12.831648712Z" level=info msg="StartContainer for \"6ef6b3b7bb707b94e625314c88b224330365de2cb1516cc312550001c16d1ec5\" returns successfully" Sep 13 00:43:12.835955 env[1201]: time="2025-09-13T00:43:12.835865619Z" level=info msg="StartContainer for \"d7de24696d9da1292862af92cde9e9071887507291991e1f17d4267375afc685\" returns successfully" Sep 13 00:43:12.989133 kubelet[1563]: E0913 00:43:12.989076 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:12.989370 kubelet[1563]: E0913 00:43:12.989243 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.990922 kubelet[1563]: E0913 00:43:12.990897 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"localhost\" not found" node="localhost" Sep 13 00:43:12.991046 kubelet[1563]: E0913 00:43:12.991008 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:12.992737 kubelet[1563]: E0913 00:43:12.992697 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:12.992837 kubelet[1563]: E0913 00:43:12.992815 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995096 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995218 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995424 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995510 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995673 1563 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:43:13.995830 kubelet[1563]: E0913 00:43:13.995765 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:14.266247 kubelet[1563]: I0913 00:43:14.266114 1563 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:43:14.374198 kubelet[1563]: E0913 00:43:14.374146 1563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:43:14.424517 kubelet[1563]: I0913 00:43:14.424452 1563 apiserver.go:52] "Watching apiserver" Sep 13 00:43:14.436212 kubelet[1563]: I0913 00:43:14.436164 1563 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:43:14.569372 kubelet[1563]: I0913 00:43:14.569326 1563 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:43:14.569372 kubelet[1563]: E0913 00:43:14.569371 1563 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:43:14.638034 kubelet[1563]: I0913 00:43:14.637951 1563 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:14.821584 kubelet[1563]: E0913 00:43:14.821438 1563 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:43:14.821803 kubelet[1563]: I0913 00:43:14.821784 1563 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:14.823490 kubelet[1563]: E0913 00:43:14.823443 1563 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:14.823490 kubelet[1563]: I0913 
00:43:14.823489 1563 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:14.824871 kubelet[1563]: E0913 00:43:14.824835 1563 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:14.995164 kubelet[1563]: I0913 00:43:14.995122 1563 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:14.997120 kubelet[1563]: E0913 00:43:14.997077 1563 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:43:14.997400 kubelet[1563]: E0913 00:43:14.997228 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:16.179714 kubelet[1563]: I0913 00:43:16.179675 1563 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:43:16.184035 kubelet[1563]: E0913 00:43:16.183987 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:16.999889 kubelet[1563]: E0913 00:43:16.999809 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:17.364245 systemd[1]: Reloading. 
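The recurring "Nameserver limits exceeded" warnings above come from the resolver honoring at most three nameservers, so only the first three from resolv.conf are applied (here `1.1.1.1 1.0.0.1 8.8.8.8`) and the rest are omitted. A small sketch of that truncation; `capNameservers` is a hypothetical helper, not the kubelet's dns package.

```go
// Nameserver truncation as warned about in the dns.go:153 entries above:
// at most three nameservers are applied, the remainder are omitted.
// capNameservers is an illustrative helper, not kubelet code.
package main

import "fmt"

const maxNameservers = 3 // classic resolv.conf limit

// capNameservers keeps the first maxNameservers entries and reports the rest.
func capNameservers(ns []string) (kept, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	kept, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("applied:", kept)  // applied: [1.1.1.1 1.0.0.1 8.8.8.8]
	fmt.Println("omitted:", omitted)
}
```

This is why the log repeats the same "applied nameserver line" on every pod sandbox creation: the host's resolv.conf presumably lists a fourth server that is dropped each time.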
Sep 13 00:43:17.452782 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-09-13T00:43:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:43:17.453397 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-09-13T00:43:17Z" level=info msg="torcx already run" Sep 13 00:43:17.573837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:43:17.573865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:43:17.592381 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:43:17.686652 systemd[1]: Stopping kubelet.service... Sep 13 00:43:17.705893 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:43:17.706125 systemd[1]: Stopped kubelet.service. Sep 13 00:43:17.706182 systemd[1]: kubelet.service: Consumed 1.533s CPU time. Sep 13 00:43:17.708129 systemd[1]: Starting kubelet.service... Sep 13 00:43:17.829694 systemd[1]: Started kubelet.service. Sep 13 00:43:17.866401 kubelet[1914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:43:17.866401 kubelet[1914]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:43:17.866401 kubelet[1914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:43:17.866929 kubelet[1914]: I0913 00:43:17.866490 1914 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:43:17.872595 kubelet[1914]: I0913 00:43:17.872561 1914 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 00:43:17.872595 kubelet[1914]: I0913 00:43:17.872584 1914 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:43:17.872763 kubelet[1914]: I0913 00:43:17.872752 1914 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 00:43:17.873888 kubelet[1914]: I0913 00:43:17.873853 1914 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 13 00:43:17.875847 kubelet[1914]: I0913 00:43:17.875817 1914 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:43:17.879843 kubelet[1914]: E0913 00:43:17.879813 1914 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:43:17.879843 kubelet[1914]: I0913 00:43:17.879846 1914 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:43:17.884112 kubelet[1914]: I0913 00:43:17.884071 1914 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:43:17.884325 kubelet[1914]: I0913 00:43:17.884291 1914 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:43:17.884504 kubelet[1914]: I0913 00:43:17.884320 1914 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:43:17.884622 kubelet[1914]: I0913 00:43:17.884513 1914 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:43:17.884622 kubelet[1914]: I0913 00:43:17.884525 1914 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 00:43:17.884622 kubelet[1914]: I0913 00:43:17.884595 1914 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:43:17.884765 kubelet[1914]: I0913 00:43:17.884747 1914 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 00:43:17.884765 kubelet[1914]: I0913 00:43:17.884762 1914 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:43:17.884876 kubelet[1914]: I0913 00:43:17.884782 1914 kubelet.go:386] "Adding apiserver pod source"
Sep 13 00:43:17.884876 kubelet[1914]: I0913 00:43:17.884800 1914 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:43:17.885702 kubelet[1914]: I0913 00:43:17.885677 1914 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:43:17.886312 kubelet[1914]: I0913 00:43:17.886285 1914 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 00:43:17.896011 kubelet[1914]: I0913 00:43:17.895979 1914 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:43:17.896159 kubelet[1914]: I0913 00:43:17.896034 1914 server.go:1289] "Started kubelet"
Sep 13 00:43:17.896361 kubelet[1914]: I0913 00:43:17.896318 1914 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:43:17.896597 kubelet[1914]: I0913 00:43:17.896542 1914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:43:17.896875 kubelet[1914]: I0913 00:43:17.896838 1914 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:43:17.897685 kubelet[1914]: I0913 00:43:17.897660 1914 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 00:43:17.901534 kubelet[1914]: I0913 00:43:17.901507 1914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:43:17.902480 kubelet[1914]: I0913 00:43:17.902184 1914 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:43:17.902480 kubelet[1914]: I0913 00:43:17.902273 1914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:43:17.904270 kubelet[1914]: I0913 00:43:17.904249 1914 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:43:17.904559 kubelet[1914]: I0913 00:43:17.904543 1914 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:43:17.906706 kubelet[1914]: E0913 00:43:17.906683 1914 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:43:17.907437 kubelet[1914]: I0913 00:43:17.907416 1914 factory.go:223] Registration of the containerd container factory successfully
Sep 13 00:43:17.907565 kubelet[1914]: I0913 00:43:17.907548 1914 factory.go:223] Registration of the systemd container factory successfully
Sep 13 00:43:17.907753 kubelet[1914]: I0913 00:43:17.907726 1914 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:43:17.919111 kubelet[1914]: I0913 00:43:17.918941 1914 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:43:17.920147 kubelet[1914]: I0913 00:43:17.920102 1914 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:43:17.920216 kubelet[1914]: I0913 00:43:17.920152 1914 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 00:43:17.920216 kubelet[1914]: I0913 00:43:17.920182 1914 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:43:17.920216 kubelet[1914]: I0913 00:43:17.920194 1914 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 00:43:17.920308 kubelet[1914]: E0913 00:43:17.920241 1914 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:43:17.934123 kubelet[1914]: I0913 00:43:17.934094 1914 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:43:17.934319 kubelet[1914]: I0913 00:43:17.934301 1914 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:43:17.934411 kubelet[1914]: I0913 00:43:17.934398 1914 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:43:17.934645 kubelet[1914]: I0913 00:43:17.934630 1914 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:43:17.934753 kubelet[1914]: I0913 00:43:17.934723 1914 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:43:17.934830 kubelet[1914]: I0913 00:43:17.934816 1914 policy_none.go:49] "None policy: Start"
Sep 13 00:43:17.934946 kubelet[1914]: I0913 00:43:17.934931 1914 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:43:17.935037 kubelet[1914]: I0913 00:43:17.935022 1914 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:43:17.935218 kubelet[1914]: I0913 00:43:17.935203 1914 state_mem.go:75] "Updated machine memory state"
Sep 13 00:43:17.938424 kubelet[1914]: E0913 00:43:17.938357 1914 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 00:43:17.938582 kubelet[1914]: I0913 00:43:17.938554 1914 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:43:17.939086 kubelet[1914]: I0913 00:43:17.938575 1914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:43:17.939323 kubelet[1914]: I0913 00:43:17.939276 1914 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:43:17.940996 kubelet[1914]: E0913 00:43:17.940977 1914 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:43:18.021948 kubelet[1914]: I0913 00:43:18.021896 1914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:18.022106 kubelet[1914]: I0913 00:43:18.022019 1914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:43:18.022106 kubelet[1914]: I0913 00:43:18.022080 1914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.046321 kubelet[1914]: I0913 00:43:18.046282 1914 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:43:18.205890 kubelet[1914]: I0913 00:43:18.205506 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:18.205890 kubelet[1914]: I0913 00:43:18.205551 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:18.205890 kubelet[1914]: I0913 00:43:18.205574 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.205890 kubelet[1914]: I0913 00:43:18.205588 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.205890 kubelet[1914]: I0913 00:43:18.205623 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.206132 kubelet[1914]: I0913 00:43:18.205636 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.206132 kubelet[1914]: I0913 00:43:18.205652 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:18.206132 kubelet[1914]: I0913 00:43:18.205745 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:43:18.206132 kubelet[1914]: I0913 00:43:18.205803 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:43:18.352080 kubelet[1914]: E0913 00:43:18.352032 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:18.380733 kubelet[1914]: E0913 00:43:18.378948 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:18.380733 kubelet[1914]: E0913 00:43:18.379152 1914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:43:18.380733 kubelet[1914]: E0913 00:43:18.379252 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:18.403977 kubelet[1914]: I0913 00:43:18.403910 1914 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 13 00:43:18.404207 kubelet[1914]: I0913 00:43:18.404017 1914 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 00:43:18.511667 sudo[1954]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 00:43:18.511882 sudo[1954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 00:43:18.886072 kubelet[1914]: I0913 00:43:18.886006 1914 apiserver.go:52] "Watching apiserver"
Sep 13 00:43:18.904950 kubelet[1914]: I0913 00:43:18.904892 1914 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:43:18.930732 kubelet[1914]: I0913 00:43:18.930695 1914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:18.930833 kubelet[1914]: E0913 00:43:18.930735 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:18.933687 kubelet[1914]: I0913 00:43:18.930960 1914 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:43:19.038667 sudo[1954]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:19.123606 kubelet[1914]: E0913 00:43:19.123540 1914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:43:19.123838 kubelet[1914]: E0913 00:43:19.123809 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:19.140085 kubelet[1914]: E0913 00:43:19.139938 1914 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:43:19.140252 kubelet[1914]: E0913 00:43:19.140139 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:19.298269 kubelet[1914]: I0913 00:43:19.298153 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.298102601 podStartE2EDuration="1.298102601s" podCreationTimestamp="2025-09-13 00:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:19.297580929 +0000 UTC m=+1.458890742" watchObservedRunningTime="2025-09-13 00:43:19.298102601 +0000 UTC m=+1.459412404"
Sep 13 00:43:19.298508 kubelet[1914]: I0913 00:43:19.298358 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.298349611 podStartE2EDuration="1.298349611s" podCreationTimestamp="2025-09-13 00:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:19.123646744 +0000 UTC m=+1.284956547" watchObservedRunningTime="2025-09-13 00:43:19.298349611 +0000 UTC m=+1.459659414"
Sep 13 00:43:19.931848 kubelet[1914]: E0913 00:43:19.931814 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:19.932278 kubelet[1914]: E0913 00:43:19.931921 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:21.769357 kubelet[1914]: E0913 00:43:21.769307 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:21.910002 sudo[1302]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:21.914812 sshd[1298]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:21.917570 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:49456.service: Deactivated successfully.
Sep 13 00:43:21.918395 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:43:21.918563 systemd[1]: session-5.scope: Consumed 5.398s CPU time.
Sep 13 00:43:21.919071 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:43:21.919837 systemd-logind[1189]: Removed session 5.
Sep 13 00:43:22.238141 kubelet[1914]: I0913 00:43:22.238106 1914 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:43:22.238668 env[1201]: time="2025-09-13T00:43:22.238600097Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:43:22.238964 kubelet[1914]: I0913 00:43:22.238838 1914 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:43:22.961493 kubelet[1914]: E0913 00:43:22.961414 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.136565 kubelet[1914]: I0913 00:43:23.136452 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.136431447 podStartE2EDuration="7.136431447s" podCreationTimestamp="2025-09-13 00:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:19.371133235 +0000 UTC m=+1.532443038" watchObservedRunningTime="2025-09-13 00:43:23.136431447 +0000 UTC m=+5.297741240"
Sep 13 00:43:23.160945 systemd[1]: Created slice kubepods-besteffort-pod532c7451_d655_4c16_972a_f07e44b04e44.slice.
Sep 13 00:43:23.171052 systemd[1]: Created slice kubepods-burstable-pod22bd6160_3044_44ca_9cec_d99d44bc2424.slice.
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238560 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cni-path\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238604 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-etc-cni-netd\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238625 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-lib-modules\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238639 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22bd6160-3044-44ca-9cec-d99d44bc2424-clustermesh-secrets\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238656 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmh9z\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-kube-api-access-fmh9z\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.238703 kubelet[1914]: I0913 00:43:23.238672 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/532c7451-d655-4c16-972a-f07e44b04e44-kube-proxy\") pod \"kube-proxy-2pl5t\" (UID: \"532c7451-d655-4c16-972a-f07e44b04e44\") " pod="kube-system/kube-proxy-2pl5t"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238685 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/532c7451-d655-4c16-972a-f07e44b04e44-xtables-lock\") pod \"kube-proxy-2pl5t\" (UID: \"532c7451-d655-4c16-972a-f07e44b04e44\") " pod="kube-system/kube-proxy-2pl5t"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238697 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tt2n\" (UniqueName: \"kubernetes.io/projected/532c7451-d655-4c16-972a-f07e44b04e44-kube-api-access-5tt2n\") pod \"kube-proxy-2pl5t\" (UID: \"532c7451-d655-4c16-972a-f07e44b04e44\") " pod="kube-system/kube-proxy-2pl5t"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238712 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-run\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238724 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-bpf-maps\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238782 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-hostproc\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239012 kubelet[1914]: I0913 00:43:23.238820 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-xtables-lock\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239158 kubelet[1914]: I0913 00:43:23.238841 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-config-path\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239158 kubelet[1914]: I0913 00:43:23.238854 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-net\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239158 kubelet[1914]: I0913 00:43:23.238877 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-kernel\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239158 kubelet[1914]: I0913 00:43:23.238901 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/532c7451-d655-4c16-972a-f07e44b04e44-lib-modules\") pod \"kube-proxy-2pl5t\" (UID: \"532c7451-d655-4c16-972a-f07e44b04e44\") " pod="kube-system/kube-proxy-2pl5t"
Sep 13 00:43:23.239158 kubelet[1914]: I0913 00:43:23.238916 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-cgroup\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.239276 kubelet[1914]: I0913 00:43:23.238944 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-hubble-tls\") pod \"cilium-szk29\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " pod="kube-system/cilium-szk29"
Sep 13 00:43:23.340389 kubelet[1914]: I0913 00:43:23.340337 1914 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:43:23.375323 systemd[1]: Created slice kubepods-besteffort-poda1f82b3a_8528_49a1_809f_d6f516e03c01.slice.
Sep 13 00:43:23.440379 kubelet[1914]: I0913 00:43:23.440305 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4szp\" (UniqueName: \"kubernetes.io/projected/a1f82b3a-8528-49a1-809f-d6f516e03c01-kube-api-access-r4szp\") pod \"cilium-operator-6c4d7847fc-g5fc7\" (UID: \"a1f82b3a-8528-49a1-809f-d6f516e03c01\") " pod="kube-system/cilium-operator-6c4d7847fc-g5fc7"
Sep 13 00:43:23.440379 kubelet[1914]: I0913 00:43:23.440379 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1f82b3a-8528-49a1-809f-d6f516e03c01-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g5fc7\" (UID: \"a1f82b3a-8528-49a1-809f-d6f516e03c01\") " pod="kube-system/cilium-operator-6c4d7847fc-g5fc7"
Sep 13 00:43:23.468406 kubelet[1914]: E0913 00:43:23.468365 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.469082 env[1201]: time="2025-09-13T00:43:23.469022935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2pl5t,Uid:532c7451-d655-4c16-972a-f07e44b04e44,Namespace:kube-system,Attempt:0,}"
Sep 13 00:43:23.473606 kubelet[1914]: E0913 00:43:23.473582 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.474029 env[1201]: time="2025-09-13T00:43:23.473961538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szk29,Uid:22bd6160-3044-44ca-9cec-d99d44bc2424,Namespace:kube-system,Attempt:0,}"
Sep 13 00:43:23.503020 env[1201]: time="2025-09-13T00:43:23.502836474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:23.503649 env[1201]: time="2025-09-13T00:43:23.503556439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:23.503649 env[1201]: time="2025-09-13T00:43:23.503573061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:23.503939 env[1201]: time="2025-09-13T00:43:23.503886745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99c7b2cf6300a0eea215bfc5775818a3c09720fd9e52ea11085f2eefc9fc8ed1 pid=2010 runtime=io.containerd.runc.v2
Sep 13 00:43:23.505474 env[1201]: time="2025-09-13T00:43:23.505379154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:23.505474 env[1201]: time="2025-09-13T00:43:23.505434139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:23.505474 env[1201]: time="2025-09-13T00:43:23.505447394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:23.505900 env[1201]: time="2025-09-13T00:43:23.505832384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42 pid=2028 runtime=io.containerd.runc.v2
Sep 13 00:43:23.517006 systemd[1]: Started cri-containerd-212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42.scope.
Sep 13 00:43:23.519573 systemd[1]: Started cri-containerd-99c7b2cf6300a0eea215bfc5775818a3c09720fd9e52ea11085f2eefc9fc8ed1.scope.
Sep 13 00:43:23.545111 kubelet[1914]: E0913 00:43:23.545043 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.554588 env[1201]: time="2025-09-13T00:43:23.554513311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szk29,Uid:22bd6160-3044-44ca-9cec-d99d44bc2424,Namespace:kube-system,Attempt:0,} returns sandbox id \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\""
Sep 13 00:43:23.560156 kubelet[1914]: E0913 00:43:23.556978 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.560326 env[1201]: time="2025-09-13T00:43:23.558234686Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:43:23.563940 env[1201]: time="2025-09-13T00:43:23.563884378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2pl5t,Uid:532c7451-d655-4c16-972a-f07e44b04e44,Namespace:kube-system,Attempt:0,} returns sandbox id \"99c7b2cf6300a0eea215bfc5775818a3c09720fd9e52ea11085f2eefc9fc8ed1\""
Sep 13 00:43:23.565091 kubelet[1914]: E0913 00:43:23.565051 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.581614 env[1201]: time="2025-09-13T00:43:23.581539381Z" level=info msg="CreateContainer within sandbox \"99c7b2cf6300a0eea215bfc5775818a3c09720fd9e52ea11085f2eefc9fc8ed1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:43:23.609220 env[1201]: time="2025-09-13T00:43:23.609132797Z" level=info msg="CreateContainer within sandbox \"99c7b2cf6300a0eea215bfc5775818a3c09720fd9e52ea11085f2eefc9fc8ed1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab67e72d1c91b2f851195884c511c9e04b80f054d3edd8de2b47e5fbbfbf2ff0\""
Sep 13 00:43:23.610028 env[1201]: time="2025-09-13T00:43:23.609958042Z" level=info msg="StartContainer for \"ab67e72d1c91b2f851195884c511c9e04b80f054d3edd8de2b47e5fbbfbf2ff0\""
Sep 13 00:43:23.626440 systemd[1]: Started cri-containerd-ab67e72d1c91b2f851195884c511c9e04b80f054d3edd8de2b47e5fbbfbf2ff0.scope.
Sep 13 00:43:23.657956 env[1201]: time="2025-09-13T00:43:23.657849211Z" level=info msg="StartContainer for \"ab67e72d1c91b2f851195884c511c9e04b80f054d3edd8de2b47e5fbbfbf2ff0\" returns successfully"
Sep 13 00:43:23.684431 kubelet[1914]: E0913 00:43:23.684365 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.685034 env[1201]: time="2025-09-13T00:43:23.684965793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g5fc7,Uid:a1f82b3a-8528-49a1-809f-d6f516e03c01,Namespace:kube-system,Attempt:0,}"
Sep 13 00:43:23.709005 env[1201]: time="2025-09-13T00:43:23.708926411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:23.709005 env[1201]: time="2025-09-13T00:43:23.709000361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:23.709265 env[1201]: time="2025-09-13T00:43:23.709024758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:23.709518 env[1201]: time="2025-09-13T00:43:23.709436438Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc pid=2134 runtime=io.containerd.runc.v2
Sep 13 00:43:23.720398 systemd[1]: Started cri-containerd-7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc.scope.
Sep 13 00:43:23.760682 env[1201]: time="2025-09-13T00:43:23.760561770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g5fc7,Uid:a1f82b3a-8528-49a1-809f-d6f516e03c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\""
Sep 13 00:43:23.761539 kubelet[1914]: E0913 00:43:23.761496 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.942043 kubelet[1914]: E0913 00:43:23.941650 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.942043 kubelet[1914]: E0913 00:43:23.942031 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.942437 kubelet[1914]: E0913 00:43:23.942201 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:43:23.971911 kubelet[1914]: I0913 00:43:23.971828 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2pl5t" podStartSLOduration=0.971804746 podStartE2EDuration="971.804746ms"
podCreationTimestamp="2025-09-13 00:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:23.971425867 +0000 UTC m=+6.132735670" watchObservedRunningTime="2025-09-13 00:43:23.971804746 +0000 UTC m=+6.133114549" Sep 13 00:43:24.943050 kubelet[1914]: E0913 00:43:24.942985 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:27.525652 update_engine[1192]: I0913 00:43:27.525566 1192 update_attempter.cc:509] Updating boot flags... Sep 13 00:43:28.974135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810441548.mount: Deactivated successfully. Sep 13 00:43:31.784126 kubelet[1914]: E0913 00:43:31.784077 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:31.953687 kubelet[1914]: E0913 00:43:31.953629 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:33.984405 env[1201]: time="2025-09-13T00:43:33.984316110Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:33.987741 env[1201]: time="2025-09-13T00:43:33.987675286Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:33.994673 env[1201]: time="2025-09-13T00:43:33.994616426Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:33.995226 env[1201]: time="2025-09-13T00:43:33.995195479Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:43:33.996511 env[1201]: time="2025-09-13T00:43:33.996450485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:43:34.086840 env[1201]: time="2025-09-13T00:43:34.086758697Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:43:34.100275 env[1201]: time="2025-09-13T00:43:34.100230766Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\"" Sep 13 00:43:34.100848 env[1201]: time="2025-09-13T00:43:34.100815288Z" level=info msg="StartContainer for \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\"" Sep 13 00:43:34.122708 systemd[1]: run-containerd-runc-k8s.io-b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc-runc.zziyIr.mount: Deactivated successfully. Sep 13 00:43:34.124324 systemd[1]: Started cri-containerd-b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc.scope. 
Sep 13 00:43:34.148638 env[1201]: time="2025-09-13T00:43:34.148565021Z" level=info msg="StartContainer for \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\" returns successfully" Sep 13 00:43:34.156559 systemd[1]: cri-containerd-b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc.scope: Deactivated successfully. Sep 13 00:43:34.437862 env[1201]: time="2025-09-13T00:43:34.437800066Z" level=info msg="shim disconnected" id=b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc Sep 13 00:43:34.437862 env[1201]: time="2025-09-13T00:43:34.437857905Z" level=warning msg="cleaning up after shim disconnected" id=b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc namespace=k8s.io Sep 13 00:43:34.437862 env[1201]: time="2025-09-13T00:43:34.437868305Z" level=info msg="cleaning up dead shim" Sep 13 00:43:34.445520 env[1201]: time="2025-09-13T00:43:34.445448826Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2364 runtime=io.containerd.runc.v2\n" Sep 13 00:43:34.960140 kubelet[1914]: E0913 00:43:34.960099 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:34.969637 env[1201]: time="2025-09-13T00:43:34.969592435Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:43:34.988911 env[1201]: time="2025-09-13T00:43:34.988855823Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\"" Sep 13 00:43:34.989289 env[1201]: time="2025-09-13T00:43:34.989262810Z" 
level=info msg="StartContainer for \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\"" Sep 13 00:43:35.007171 systemd[1]: Started cri-containerd-1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b.scope. Sep 13 00:43:35.032738 env[1201]: time="2025-09-13T00:43:35.032676858Z" level=info msg="StartContainer for \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\" returns successfully" Sep 13 00:43:35.042201 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:43:35.042404 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:43:35.042583 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:43:35.044041 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:43:35.045117 systemd[1]: cri-containerd-1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b.scope: Deactivated successfully. Sep 13 00:43:35.055084 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:43:35.070043 env[1201]: time="2025-09-13T00:43:35.069980042Z" level=info msg="shim disconnected" id=1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b Sep 13 00:43:35.070043 env[1201]: time="2025-09-13T00:43:35.070024326Z" level=warning msg="cleaning up after shim disconnected" id=1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b namespace=k8s.io Sep 13 00:43:35.070043 env[1201]: time="2025-09-13T00:43:35.070032742Z" level=info msg="cleaning up dead shim" Sep 13 00:43:35.077054 env[1201]: time="2025-09-13T00:43:35.076974685Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2429 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:43:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Sep 13 00:43:35.097121 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc-rootfs.mount: Deactivated successfully. Sep 13 00:43:35.403197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090984509.mount: Deactivated successfully. Sep 13 00:43:35.963592 kubelet[1914]: E0913 00:43:35.963450 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:35.969090 env[1201]: time="2025-09-13T00:43:35.968999898Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:43:35.992788 env[1201]: time="2025-09-13T00:43:35.992697639Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\"" Sep 13 00:43:35.993738 env[1201]: time="2025-09-13T00:43:35.993686744Z" level=info msg="StartContainer for \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\"" Sep 13 00:43:36.011941 systemd[1]: Started cri-containerd-63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f.scope. Sep 13 00:43:36.043125 systemd[1]: cri-containerd-63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f.scope: Deactivated successfully. 
Sep 13 00:43:36.043915 env[1201]: time="2025-09-13T00:43:36.043857864Z" level=info msg="StartContainer for \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\" returns successfully" Sep 13 00:43:36.411352 env[1201]: time="2025-09-13T00:43:36.411241174Z" level=info msg="shim disconnected" id=63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f Sep 13 00:43:36.411352 env[1201]: time="2025-09-13T00:43:36.411341163Z" level=warning msg="cleaning up after shim disconnected" id=63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f namespace=k8s.io Sep 13 00:43:36.411352 env[1201]: time="2025-09-13T00:43:36.411357313Z" level=info msg="cleaning up dead shim" Sep 13 00:43:36.418912 env[1201]: time="2025-09-13T00:43:36.418846874Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2487 runtime=io.containerd.runc.v2\n" Sep 13 00:43:36.425576 env[1201]: time="2025-09-13T00:43:36.425523502Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:36.428702 env[1201]: time="2025-09-13T00:43:36.428653220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:36.430409 env[1201]: time="2025-09-13T00:43:36.430369994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:43:36.431075 env[1201]: time="2025-09-13T00:43:36.431029837Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:43:36.438371 env[1201]: time="2025-09-13T00:43:36.438326223Z" level=info msg="CreateContainer within sandbox \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:43:36.459334 env[1201]: time="2025-09-13T00:43:36.459284390Z" level=info msg="CreateContainer within sandbox \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\"" Sep 13 00:43:36.459843 env[1201]: time="2025-09-13T00:43:36.459820410Z" level=info msg="StartContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\"" Sep 13 00:43:36.476539 systemd[1]: Started cri-containerd-299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef.scope. 
Sep 13 00:43:36.509081 env[1201]: time="2025-09-13T00:43:36.509001690Z" level=info msg="StartContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" returns successfully" Sep 13 00:43:36.968594 kubelet[1914]: E0913 00:43:36.968545 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:36.970902 kubelet[1914]: E0913 00:43:36.970877 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:36.977296 env[1201]: time="2025-09-13T00:43:36.977224778Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:43:36.999696 env[1201]: time="2025-09-13T00:43:36.999610714Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\"" Sep 13 00:43:37.000371 env[1201]: time="2025-09-13T00:43:37.000315563Z" level=info msg="StartContainer for \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\"" Sep 13 00:43:37.009892 kubelet[1914]: I0913 00:43:37.009818 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g5fc7" podStartSLOduration=1.340110566 podStartE2EDuration="14.009792573s" podCreationTimestamp="2025-09-13 00:43:23 +0000 UTC" firstStartedPulling="2025-09-13 00:43:23.76229762 +0000 UTC m=+5.923607413" lastFinishedPulling="2025-09-13 00:43:36.431979617 +0000 UTC m=+18.593289420" observedRunningTime="2025-09-13 00:43:36.984645059 +0000 UTC m=+19.145954862" 
watchObservedRunningTime="2025-09-13 00:43:37.009792573 +0000 UTC m=+19.171102376" Sep 13 00:43:37.022843 systemd[1]: Started cri-containerd-d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8.scope. Sep 13 00:43:37.086063 systemd[1]: cri-containerd-d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8.scope: Deactivated successfully. Sep 13 00:43:37.087496 env[1201]: time="2025-09-13T00:43:37.087419021Z" level=info msg="StartContainer for \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\" returns successfully" Sep 13 00:43:37.098998 systemd[1]: run-containerd-runc-k8s.io-299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef-runc.vAhbyH.mount: Deactivated successfully. Sep 13 00:43:37.108674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8-rootfs.mount: Deactivated successfully. Sep 13 00:43:37.123422 env[1201]: time="2025-09-13T00:43:37.123345128Z" level=info msg="shim disconnected" id=d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8 Sep 13 00:43:37.123422 env[1201]: time="2025-09-13T00:43:37.123421141Z" level=warning msg="cleaning up after shim disconnected" id=d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8 namespace=k8s.io Sep 13 00:43:37.123422 env[1201]: time="2025-09-13T00:43:37.123434547Z" level=info msg="cleaning up dead shim" Sep 13 00:43:37.135491 env[1201]: time="2025-09-13T00:43:37.135387850Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:43:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2578 runtime=io.containerd.runc.v2\n" Sep 13 00:43:37.980900 kubelet[1914]: E0913 00:43:37.980847 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:37.981381 kubelet[1914]: E0913 00:43:37.981016 1914 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:37.988045 env[1201]: time="2025-09-13T00:43:37.987970289Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:43:38.012497 env[1201]: time="2025-09-13T00:43:38.012425746Z" level=info msg="CreateContainer within sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\"" Sep 13 00:43:38.013225 env[1201]: time="2025-09-13T00:43:38.013193652Z" level=info msg="StartContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\"" Sep 13 00:43:38.028126 systemd[1]: Started cri-containerd-c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db.scope. Sep 13 00:43:38.138761 env[1201]: time="2025-09-13T00:43:38.138669156Z" level=info msg="StartContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" returns successfully" Sep 13 00:43:38.163626 systemd[1]: run-containerd-runc-k8s.io-c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db-runc.fHgh5W.mount: Deactivated successfully. Sep 13 00:43:38.278850 kubelet[1914]: I0913 00:43:38.278699 1914 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:43:38.601094 systemd[1]: Created slice kubepods-burstable-podb1610428_3d39_45c5_8a17_0d17d9f225fa.slice. Sep 13 00:43:38.727830 systemd[1]: Created slice kubepods-burstable-podef33aad8_328b_43a0_879b_d818ffd3f37d.slice. 
Sep 13 00:43:38.736285 kubelet[1914]: I0913 00:43:38.736223 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1610428-3d39-45c5-8a17-0d17d9f225fa-config-volume\") pod \"coredns-674b8bbfcf-8pvhz\" (UID: \"b1610428-3d39-45c5-8a17-0d17d9f225fa\") " pod="kube-system/coredns-674b8bbfcf-8pvhz" Sep 13 00:43:38.736285 kubelet[1914]: I0913 00:43:38.736257 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrtr\" (UniqueName: \"kubernetes.io/projected/b1610428-3d39-45c5-8a17-0d17d9f225fa-kube-api-access-kxrtr\") pod \"coredns-674b8bbfcf-8pvhz\" (UID: \"b1610428-3d39-45c5-8a17-0d17d9f225fa\") " pod="kube-system/coredns-674b8bbfcf-8pvhz" Sep 13 00:43:38.837536 kubelet[1914]: I0913 00:43:38.837444 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef33aad8-328b-43a0-879b-d818ffd3f37d-config-volume\") pod \"coredns-674b8bbfcf-wqqwh\" (UID: \"ef33aad8-328b-43a0-879b-d818ffd3f37d\") " pod="kube-system/coredns-674b8bbfcf-wqqwh" Sep 13 00:43:38.837536 kubelet[1914]: I0913 00:43:38.837535 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp9jf\" (UniqueName: \"kubernetes.io/projected/ef33aad8-328b-43a0-879b-d818ffd3f37d-kube-api-access-tp9jf\") pod \"coredns-674b8bbfcf-wqqwh\" (UID: \"ef33aad8-328b-43a0-879b-d818ffd3f37d\") " pod="kube-system/coredns-674b8bbfcf-wqqwh" Sep 13 00:43:38.906494 kubelet[1914]: E0913 00:43:38.906279 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:38.907532 env[1201]: time="2025-09-13T00:43:38.907448897Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvhz,Uid:b1610428-3d39-45c5-8a17-0d17d9f225fa,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:38.985858 kubelet[1914]: E0913 00:43:38.985795 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:39.031020 kubelet[1914]: E0913 00:43:39.030970 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:39.032099 env[1201]: time="2025-09-13T00:43:39.032050381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wqqwh,Uid:ef33aad8-328b-43a0-879b-d818ffd3f37d,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:39.987726 kubelet[1914]: E0913 00:43:39.987659 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:40.152887 systemd-networkd[1023]: cilium_host: Link UP Sep 13 00:43:40.153076 systemd-networkd[1023]: cilium_net: Link UP Sep 13 00:43:40.156068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:43:40.156149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:43:40.156345 systemd-networkd[1023]: cilium_net: Gained carrier Sep 13 00:43:40.156614 systemd-networkd[1023]: cilium_host: Gained carrier Sep 13 00:43:40.235309 systemd-networkd[1023]: cilium_vxlan: Link UP Sep 13 00:43:40.235317 systemd-networkd[1023]: cilium_vxlan: Gained carrier Sep 13 00:43:40.252627 systemd-networkd[1023]: cilium_host: Gained IPv6LL Sep 13 00:43:40.442485 kernel: NET: Registered PF_ALG protocol family Sep 13 00:43:40.989981 kubelet[1914]: E0913 00:43:40.989931 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:41.011599 systemd-networkd[1023]: cilium_net: Gained IPv6LL Sep 13 00:43:41.042901 systemd-networkd[1023]: lxc_health: Link UP Sep 13 00:43:41.049834 systemd-networkd[1023]: lxc_health: Gained carrier Sep 13 00:43:41.050490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:43:41.445285 systemd-networkd[1023]: lxcd06a20d8b493: Link UP Sep 13 00:43:41.474489 kernel: eth0: renamed from tmpbd375 Sep 13 00:43:41.480768 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:43:41.480864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd06a20d8b493: link becomes ready Sep 13 00:43:41.480496 systemd-networkd[1023]: lxcd06a20d8b493: Gained carrier Sep 13 00:43:41.530178 kubelet[1914]: I0913 00:43:41.530076 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-szk29" podStartSLOduration=8.091719485 podStartE2EDuration="18.530049553s" podCreationTimestamp="2025-09-13 00:43:23 +0000 UTC" firstStartedPulling="2025-09-13 00:43:23.557906634 +0000 UTC m=+5.719216437" lastFinishedPulling="2025-09-13 00:43:33.996236702 +0000 UTC m=+16.157546505" observedRunningTime="2025-09-13 00:43:39.007503773 +0000 UTC m=+21.168813566" watchObservedRunningTime="2025-09-13 00:43:41.530049553 +0000 UTC m=+23.691359356" Sep 13 00:43:41.577010 systemd-networkd[1023]: lxc7bcbf5a8940a: Link UP Sep 13 00:43:41.578489 kernel: eth0: renamed from tmp1ad95 Sep 13 00:43:41.585422 systemd-networkd[1023]: lxc7bcbf5a8940a: Gained carrier Sep 13 00:43:41.585567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7bcbf5a8940a: link becomes ready Sep 13 00:43:41.590016 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL Sep 13 00:43:41.991527 kubelet[1914]: E0913 00:43:41.991485 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
00:43:42.483667 systemd-networkd[1023]: lxc_health: Gained IPv6LL Sep 13 00:43:42.739659 systemd-networkd[1023]: lxcd06a20d8b493: Gained IPv6LL Sep 13 00:43:42.931688 systemd-networkd[1023]: lxc7bcbf5a8940a: Gained IPv6LL Sep 13 00:43:42.993021 kubelet[1914]: E0913 00:43:42.992879 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:43.994937 kubelet[1914]: E0913 00:43:43.994896 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:44.061213 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:44400.service. Sep 13 00:43:44.099513 sshd[3143]: Accepted publickey for core from 10.0.0.1 port 44400 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:43:44.101428 sshd[3143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:44.106443 systemd-logind[1189]: New session 6 of user core. Sep 13 00:43:44.107815 systemd[1]: Started session-6.scope. Sep 13 00:43:44.295112 sshd[3143]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:44.298438 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:44400.service: Deactivated successfully. Sep 13 00:43:44.299623 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:43:44.301761 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:43:44.303381 systemd-logind[1189]: Removed session 6. Sep 13 00:43:45.405852 env[1201]: time="2025-09-13T00:43:45.405761947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:45.405852 env[1201]: time="2025-09-13T00:43:45.405811019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:45.405852 env[1201]: time="2025-09-13T00:43:45.405822862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:45.406393 env[1201]: time="2025-09-13T00:43:45.406057914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b pid=3169 runtime=io.containerd.runc.v2 Sep 13 00:43:45.423149 systemd[1]: run-containerd-runc-k8s.io-bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b-runc.O9BBe4.mount: Deactivated successfully. Sep 13 00:43:45.425715 systemd[1]: Started cri-containerd-bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b.scope. Sep 13 00:43:45.437645 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:45.458101 env[1201]: time="2025-09-13T00:43:45.458026617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8pvhz,Uid:b1610428-3d39-45c5-8a17-0d17d9f225fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b\"" Sep 13 00:43:45.458910 kubelet[1914]: E0913 00:43:45.458863 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:45.468185 env[1201]: time="2025-09-13T00:43:45.468113542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:45.468349 env[1201]: time="2025-09-13T00:43:45.468198211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:45.468349 env[1201]: time="2025-09-13T00:43:45.468222276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:45.468443 env[1201]: time="2025-09-13T00:43:45.468407524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63 pid=3210 runtime=io.containerd.runc.v2 Sep 13 00:43:45.483770 systemd[1]: Started cri-containerd-1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63.scope. Sep 13 00:43:45.502255 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:43:45.522607 env[1201]: time="2025-09-13T00:43:45.522537903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wqqwh,Uid:ef33aad8-328b-43a0-879b-d818ffd3f37d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63\"" Sep 13 00:43:45.523292 kubelet[1914]: E0913 00:43:45.523258 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:45.714753 env[1201]: time="2025-09-13T00:43:45.714623347Z" level=info msg="CreateContainer within sandbox \"bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:43:45.926621 env[1201]: time="2025-09-13T00:43:45.926550885Z" level=info msg="CreateContainer within sandbox \"1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:43:46.409126 systemd[1]: 
run-containerd-runc-k8s.io-1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63-runc.5MJd1u.mount: Deactivated successfully. Sep 13 00:43:46.663412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917371801.mount: Deactivated successfully. Sep 13 00:43:47.408751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145425347.mount: Deactivated successfully. Sep 13 00:43:47.581507 env[1201]: time="2025-09-13T00:43:47.581402542Z" level=info msg="CreateContainer within sandbox \"bd37505aa14ede45180fbc2ea8c1ad1b0a2fc425d5cda099f1bcd39c0255dc7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f711af5ccfa8ae74b2c5ddb708e6c6b0ad54794938462b0a47e01ab9b1c6344\"" Sep 13 00:43:47.582277 env[1201]: time="2025-09-13T00:43:47.582244264Z" level=info msg="StartContainer for \"8f711af5ccfa8ae74b2c5ddb708e6c6b0ad54794938462b0a47e01ab9b1c6344\"" Sep 13 00:43:47.600197 systemd[1]: Started cri-containerd-8f711af5ccfa8ae74b2c5ddb708e6c6b0ad54794938462b0a47e01ab9b1c6344.scope. Sep 13 00:43:47.927346 env[1201]: time="2025-09-13T00:43:47.927271175Z" level=info msg="CreateContainer within sandbox \"1ad952d2605533097154e5fe3de84f34d1b93e60732bd355535956b3fbed5b63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f333f4c8cad3d3e057d9afee19a0b49b509ffce05ae4f58164dceb8fb9e6db64\"" Sep 13 00:43:47.927891 env[1201]: time="2025-09-13T00:43:47.927854602Z" level=info msg="StartContainer for \"f333f4c8cad3d3e057d9afee19a0b49b509ffce05ae4f58164dceb8fb9e6db64\"" Sep 13 00:43:47.952226 systemd[1]: Started cri-containerd-f333f4c8cad3d3e057d9afee19a0b49b509ffce05ae4f58164dceb8fb9e6db64.scope. 
Sep 13 00:43:48.112776 env[1201]: time="2025-09-13T00:43:48.112683047Z" level=info msg="StartContainer for \"8f711af5ccfa8ae74b2c5ddb708e6c6b0ad54794938462b0a47e01ab9b1c6344\" returns successfully" Sep 13 00:43:48.235594 env[1201]: time="2025-09-13T00:43:48.235383973Z" level=info msg="StartContainer for \"f333f4c8cad3d3e057d9afee19a0b49b509ffce05ae4f58164dceb8fb9e6db64\" returns successfully" Sep 13 00:43:48.240628 kubelet[1914]: E0913 00:43:48.240579 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:48.563255 kubelet[1914]: I0913 00:43:48.563182 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8pvhz" podStartSLOduration=25.563156445 podStartE2EDuration="25.563156445s" podCreationTimestamp="2025-09-13 00:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:48.56308462 +0000 UTC m=+30.724394424" watchObservedRunningTime="2025-09-13 00:43:48.563156445 +0000 UTC m=+30.724466248" Sep 13 00:43:49.241881 kubelet[1914]: E0913 00:43:49.241829 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:49.242252 kubelet[1914]: E0913 00:43:49.241845 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:49.300148 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:44404.service. 
Sep 13 00:43:49.333784 sshd[3322]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:43:49.335718 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:49.339745 systemd-logind[1189]: New session 7 of user core. Sep 13 00:43:49.340777 systemd[1]: Started session-7.scope. Sep 13 00:43:49.586650 sshd[3322]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:49.589085 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:44404.service: Deactivated successfully. Sep 13 00:43:49.590024 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:43:49.590956 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:43:49.591760 systemd-logind[1189]: Removed session 7. Sep 13 00:43:50.243708 kubelet[1914]: E0913 00:43:50.243483 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:50.244091 kubelet[1914]: E0913 00:43:50.243849 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:50.278814 kubelet[1914]: I0913 00:43:50.278723 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wqqwh" podStartSLOduration=27.27869563 podStartE2EDuration="27.27869563s" podCreationTimestamp="2025-09-13 00:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:49.334452347 +0000 UTC m=+31.495762150" watchObservedRunningTime="2025-09-13 00:43:50.27869563 +0000 UTC m=+32.440005433" Sep 13 00:43:51.244996 kubelet[1914]: E0913 00:43:51.244959 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:51.244996 kubelet[1914]: E0913 00:43:51.244965 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:43:54.591248 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:49684.service. Sep 13 00:43:54.620294 sshd[3353]: Accepted publickey for core from 10.0.0.1 port 49684 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:43:54.621267 sshd[3353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:54.624339 systemd-logind[1189]: New session 8 of user core. Sep 13 00:43:54.625305 systemd[1]: Started session-8.scope. Sep 13 00:43:54.730724 sshd[3353]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:54.732930 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:49684.service: Deactivated successfully. Sep 13 00:43:54.733781 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:43:54.734528 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:43:54.735275 systemd-logind[1189]: Removed session 8. Sep 13 00:43:59.736212 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:49700.service. Sep 13 00:43:59.767214 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 49700 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:43:59.768423 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:59.771827 systemd-logind[1189]: New session 9 of user core. Sep 13 00:43:59.772641 systemd[1]: Started session-9.scope. Sep 13 00:43:59.903083 sshd[3367]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:59.905589 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:49700.service: Deactivated successfully. Sep 13 00:43:59.906333 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 13 00:43:59.906975 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:43:59.907757 systemd-logind[1189]: Removed session 9. Sep 13 00:44:04.908262 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:55496.service. Sep 13 00:44:05.030924 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 55496 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:05.032521 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:05.036069 systemd-logind[1189]: New session 10 of user core. Sep 13 00:44:05.036887 systemd[1]: Started session-10.scope. Sep 13 00:44:05.154927 sshd[3381]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:05.157268 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:55496.service: Deactivated successfully. Sep 13 00:44:05.158107 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:44:05.158634 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:44:05.159496 systemd-logind[1189]: Removed session 10. Sep 13 00:44:10.159523 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:58064.service. Sep 13 00:44:10.190180 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:10.191314 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:10.194938 systemd-logind[1189]: New session 11 of user core. Sep 13 00:44:10.196012 systemd[1]: Started session-11.scope. Sep 13 00:44:10.328009 sshd[3395]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:10.331600 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:58064.service: Deactivated successfully. Sep 13 00:44:10.332293 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:44:10.332964 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit. 
Sep 13 00:44:10.334354 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:58076.service. Sep 13 00:44:10.335321 systemd-logind[1189]: Removed session 11. Sep 13 00:44:10.365392 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:10.366926 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:10.370781 systemd-logind[1189]: New session 12 of user core. Sep 13 00:44:10.371690 systemd[1]: Started session-12.scope. Sep 13 00:44:10.576942 sshd[3409]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:10.580270 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:58076.service: Deactivated successfully. Sep 13 00:44:10.580936 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:44:10.581491 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:44:10.582841 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:58082.service. Sep 13 00:44:10.583845 systemd-logind[1189]: Removed session 12. Sep 13 00:44:10.617372 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:10.618818 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:10.623006 systemd-logind[1189]: New session 13 of user core. Sep 13 00:44:10.624091 systemd[1]: Started session-13.scope. Sep 13 00:44:10.851484 sshd[3421]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:10.854656 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:58082.service: Deactivated successfully. Sep 13 00:44:10.855621 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:44:10.856796 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:44:10.857663 systemd-logind[1189]: Removed session 13. Sep 13 00:44:15.856029 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:58090.service. 
Sep 13 00:44:15.885299 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 58090 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:15.886359 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:15.889868 systemd-logind[1189]: New session 14 of user core. Sep 13 00:44:15.890683 systemd[1]: Started session-14.scope. Sep 13 00:44:15.995405 sshd[3434]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:15.997572 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:58090.service: Deactivated successfully. Sep 13 00:44:15.998337 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:44:15.998952 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:44:15.999760 systemd-logind[1189]: Removed session 14. Sep 13 00:44:20.999856 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:42284.service. Sep 13 00:44:21.028981 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 42284 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:21.030069 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:21.033232 systemd-logind[1189]: New session 15 of user core. Sep 13 00:44:21.034047 systemd[1]: Started session-15.scope. Sep 13 00:44:21.139392 sshd[3450]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:21.141709 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:42284.service: Deactivated successfully. Sep 13 00:44:21.142567 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:44:21.143154 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:44:21.144001 systemd-logind[1189]: Removed session 15. Sep 13 00:44:26.144010 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:42298.service. 
Sep 13 00:44:26.173496 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 42298 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:26.174613 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:26.178441 systemd-logind[1189]: New session 16 of user core. Sep 13 00:44:26.179191 systemd[1]: Started session-16.scope. Sep 13 00:44:26.289039 sshd[3465]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:26.291865 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:42298.service: Deactivated successfully. Sep 13 00:44:26.292422 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:44:26.293058 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:44:26.294003 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:42302.service. Sep 13 00:44:26.294736 systemd-logind[1189]: Removed session 16. Sep 13 00:44:26.323939 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:26.325066 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:26.328484 systemd-logind[1189]: New session 17 of user core. Sep 13 00:44:26.329366 systemd[1]: Started session-17.scope. Sep 13 00:44:26.606954 sshd[3479]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:26.609865 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:42302.service: Deactivated successfully. Sep 13 00:44:26.610428 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:44:26.611077 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:44:26.612541 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:42304.service. Sep 13 00:44:26.613359 systemd-logind[1189]: Removed session 17. 
Sep 13 00:44:26.644356 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 42304 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:26.645332 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:26.649038 systemd-logind[1189]: New session 18 of user core. Sep 13 00:44:26.649930 systemd[1]: Started session-18.scope. Sep 13 00:44:27.306646 sshd[3491]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:27.309538 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:42306.service. Sep 13 00:44:27.310027 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:42304.service: Deactivated successfully. Sep 13 00:44:27.310877 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:44:27.313913 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:44:27.314879 systemd-logind[1189]: Removed session 18. Sep 13 00:44:27.350496 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 42306 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:27.352115 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:27.355871 systemd-logind[1189]: New session 19 of user core. Sep 13 00:44:27.356759 systemd[1]: Started session-19.scope. Sep 13 00:44:27.659008 sshd[3514]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:27.663058 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:42312.service. Sep 13 00:44:27.663511 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:42306.service: Deactivated successfully. Sep 13 00:44:27.664657 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:44:27.665219 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:44:27.666062 systemd-logind[1189]: Removed session 19. 
Sep 13 00:44:27.693571 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 42312 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:27.694789 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:27.698297 systemd-logind[1189]: New session 20 of user core. Sep 13 00:44:27.699097 systemd[1]: Started session-20.scope. Sep 13 00:44:27.807282 sshd[3525]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:27.809845 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:42312.service: Deactivated successfully. Sep 13 00:44:27.810636 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:44:27.811419 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:44:27.812259 systemd-logind[1189]: Removed session 20. Sep 13 00:44:32.811592 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:57312.service. Sep 13 00:44:32.840389 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 57312 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:32.841313 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:32.844330 systemd-logind[1189]: New session 21 of user core. Sep 13 00:44:32.845093 systemd[1]: Started session-21.scope. Sep 13 00:44:32.959791 sshd[3540]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:32.963002 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:57312.service: Deactivated successfully. Sep 13 00:44:32.963915 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:44:32.964480 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:44:32.965255 systemd-logind[1189]: Removed session 21. Sep 13 00:44:37.965271 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:57314.service. 
Sep 13 00:44:37.998102 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 57314 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:37.999381 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:38.004133 systemd-logind[1189]: New session 22 of user core. Sep 13 00:44:38.005146 systemd[1]: Started session-22.scope. Sep 13 00:44:38.125057 sshd[3555]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:38.127967 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:57314.service: Deactivated successfully. Sep 13 00:44:38.128719 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:44:38.129504 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:44:38.130314 systemd-logind[1189]: Removed session 22. Sep 13 00:44:39.921481 kubelet[1914]: E0913 00:44:39.921420 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:43.130935 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:35442.service. Sep 13 00:44:43.164776 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 35442 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:43.166230 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:43.170312 systemd-logind[1189]: New session 23 of user core. Sep 13 00:44:43.171524 systemd[1]: Started session-23.scope. Sep 13 00:44:43.277930 sshd[3568]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:43.280916 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:35442.service: Deactivated successfully. Sep 13 00:44:43.281482 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:44:43.281970 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit. 
Sep 13 00:44:43.283120 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:35444.service. Sep 13 00:44:43.283813 systemd-logind[1189]: Removed session 23. Sep 13 00:44:43.312407 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 35444 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:43.313690 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:43.317806 systemd-logind[1189]: New session 24 of user core. Sep 13 00:44:43.319066 systemd[1]: Started session-24.scope. Sep 13 00:44:44.664314 env[1201]: time="2025-09-13T00:44:44.663632282Z" level=info msg="StopContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" with timeout 30 (s)" Sep 13 00:44:44.664866 env[1201]: time="2025-09-13T00:44:44.664844701Z" level=info msg="Stop container \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" with signal terminated" Sep 13 00:44:44.675658 systemd[1]: cri-containerd-299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef.scope: Deactivated successfully. Sep 13 00:44:44.690698 env[1201]: time="2025-09-13T00:44:44.690622651Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:44:44.693378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef-rootfs.mount: Deactivated successfully. 
Sep 13 00:44:44.697322 env[1201]: time="2025-09-13T00:44:44.697270268Z" level=info msg="StopContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" with timeout 2 (s)" Sep 13 00:44:44.697586 env[1201]: time="2025-09-13T00:44:44.697552726Z" level=info msg="Stop container \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" with signal terminated" Sep 13 00:44:44.703440 systemd-networkd[1023]: lxc_health: Link DOWN Sep 13 00:44:44.703449 systemd-networkd[1023]: lxc_health: Lost carrier Sep 13 00:44:44.704353 env[1201]: time="2025-09-13T00:44:44.703944967Z" level=info msg="shim disconnected" id=299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef Sep 13 00:44:44.704353 env[1201]: time="2025-09-13T00:44:44.703995793Z" level=warning msg="cleaning up after shim disconnected" id=299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef namespace=k8s.io Sep 13 00:44:44.704353 env[1201]: time="2025-09-13T00:44:44.704007396Z" level=info msg="cleaning up dead shim" Sep 13 00:44:44.711560 env[1201]: time="2025-09-13T00:44:44.711509459Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Sep 13 00:44:44.714868 env[1201]: time="2025-09-13T00:44:44.714822438Z" level=info msg="StopContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" returns successfully" Sep 13 00:44:44.716259 env[1201]: time="2025-09-13T00:44:44.716213947Z" level=info msg="StopPodSandbox for \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\"" Sep 13 00:44:44.716478 env[1201]: time="2025-09-13T00:44:44.716311242Z" level=info msg="Container to stop \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:44:44.718284 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc-shm.mount: Deactivated successfully. Sep 13 00:44:44.732112 systemd[1]: cri-containerd-7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc.scope: Deactivated successfully. Sep 13 00:44:44.735369 systemd[1]: cri-containerd-c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db.scope: Deactivated successfully. Sep 13 00:44:44.735735 systemd[1]: cri-containerd-c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db.scope: Consumed 6.556s CPU time. Sep 13 00:44:44.752957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db-rootfs.mount: Deactivated successfully. Sep 13 00:44:44.758894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc-rootfs.mount: Deactivated successfully. Sep 13 00:44:44.760076 env[1201]: time="2025-09-13T00:44:44.759997539Z" level=info msg="shim disconnected" id=c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db Sep 13 00:44:44.760076 env[1201]: time="2025-09-13T00:44:44.760072943Z" level=warning msg="cleaning up after shim disconnected" id=c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db namespace=k8s.io Sep 13 00:44:44.760525 env[1201]: time="2025-09-13T00:44:44.760086589Z" level=info msg="cleaning up dead shim" Sep 13 00:44:44.765222 env[1201]: time="2025-09-13T00:44:44.765167302Z" level=info msg="shim disconnected" id=7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc Sep 13 00:44:44.765222 env[1201]: time="2025-09-13T00:44:44.765215435Z" level=warning msg="cleaning up after shim disconnected" id=7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc namespace=k8s.io Sep 13 00:44:44.765380 env[1201]: time="2025-09-13T00:44:44.765225003Z" level=info msg="cleaning up dead shim" Sep 13 
00:44:44.770177 env[1201]: time="2025-09-13T00:44:44.770133087Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3680 runtime=io.containerd.runc.v2\n" Sep 13 00:44:44.772707 env[1201]: time="2025-09-13T00:44:44.772671400Z" level=info msg="StopContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" returns successfully" Sep 13 00:44:44.772835 env[1201]: time="2025-09-13T00:44:44.772790998Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n" Sep 13 00:44:44.773172 env[1201]: time="2025-09-13T00:44:44.773144591Z" level=info msg="TearDown network for sandbox \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\" successfully" Sep 13 00:44:44.773204 env[1201]: time="2025-09-13T00:44:44.773173867Z" level=info msg="StopPodSandbox for \"7ff65d58f9491ff6063e78343d86620bf22e6d3f2a67c95bd72f5bb051c973dc\" returns successfully" Sep 13 00:44:44.773944 env[1201]: time="2025-09-13T00:44:44.773905009Z" level=info msg="StopPodSandbox for \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\"" Sep 13 00:44:44.774495 env[1201]: time="2025-09-13T00:44:44.774455016Z" level=info msg="Container to stop \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:44:44.774534 env[1201]: time="2025-09-13T00:44:44.774494391Z" level=info msg="Container to stop \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:44:44.774534 env[1201]: time="2025-09-13T00:44:44.774506354Z" level=info msg="Container to stop \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 
00:44:44.774534 env[1201]: time="2025-09-13T00:44:44.774516684Z" level=info msg="Container to stop \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:44:44.774534 env[1201]: time="2025-09-13T00:44:44.774527865Z" level=info msg="Container to stop \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:44:44.781287 systemd[1]: cri-containerd-212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42.scope: Deactivated successfully. Sep 13 00:44:44.804215 env[1201]: time="2025-09-13T00:44:44.804143433Z" level=info msg="shim disconnected" id=212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42 Sep 13 00:44:44.804215 env[1201]: time="2025-09-13T00:44:44.804215991Z" level=warning msg="cleaning up after shim disconnected" id=212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42 namespace=k8s.io Sep 13 00:44:44.804485 env[1201]: time="2025-09-13T00:44:44.804229596Z" level=info msg="cleaning up dead shim" Sep 13 00:44:44.814117 kubelet[1914]: I0913 00:44:44.811263 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4szp\" (UniqueName: \"kubernetes.io/projected/a1f82b3a-8528-49a1-809f-d6f516e03c01-kube-api-access-r4szp\") pod \"a1f82b3a-8528-49a1-809f-d6f516e03c01\" (UID: \"a1f82b3a-8528-49a1-809f-d6f516e03c01\") " Sep 13 00:44:44.814117 kubelet[1914]: I0913 00:44:44.811347 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1f82b3a-8528-49a1-809f-d6f516e03c01-cilium-config-path\") pod \"a1f82b3a-8528-49a1-809f-d6f516e03c01\" (UID: \"a1f82b3a-8528-49a1-809f-d6f516e03c01\") " Sep 13 00:44:44.814117 kubelet[1914]: I0913 00:44:44.814052 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a1f82b3a-8528-49a1-809f-d6f516e03c01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1f82b3a-8528-49a1-809f-d6f516e03c01" (UID: "a1f82b3a-8528-49a1-809f-d6f516e03c01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:44:44.815038 env[1201]: time="2025-09-13T00:44:44.812491607Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\n" Sep 13 00:44:44.815038 env[1201]: time="2025-09-13T00:44:44.812877152Z" level=info msg="TearDown network for sandbox \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" successfully" Sep 13 00:44:44.815038 env[1201]: time="2025-09-13T00:44:44.812915785Z" level=info msg="StopPodSandbox for \"212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42\" returns successfully" Sep 13 00:44:44.815154 kubelet[1914]: I0913 00:44:44.814721 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f82b3a-8528-49a1-809f-d6f516e03c01-kube-api-access-r4szp" (OuterVolumeSpecName: "kube-api-access-r4szp") pod "a1f82b3a-8528-49a1-809f-d6f516e03c01" (UID: "a1f82b3a-8528-49a1-809f-d6f516e03c01"). InnerVolumeSpecName "kube-api-access-r4szp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:44:44.912288 kubelet[1914]: I0913 00:44:44.912190 1914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4szp\" (UniqueName: \"kubernetes.io/projected/a1f82b3a-8528-49a1-809f-d6f516e03c01-kube-api-access-r4szp\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:44.912288 kubelet[1914]: I0913 00:44:44.912250 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1f82b3a-8528-49a1-809f-d6f516e03c01-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:44.921881 kubelet[1914]: E0913 00:44:44.921785 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:45.012710 kubelet[1914]: I0913 00:44:45.012637 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-run\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012710 kubelet[1914]: I0913 00:44:45.012704 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmh9z\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-kube-api-access-fmh9z\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012754 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-net\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012786 1914 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-bpf-maps\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012789 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012817 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22bd6160-3044-44ca-9cec-d99d44bc2424-clustermesh-secrets\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012843 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-lib-modules\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.012996 kubelet[1914]: I0913 00:44:45.012867 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-hostproc\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013264 kubelet[1914]: I0913 00:44:45.012868 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod 
"22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013264 kubelet[1914]: I0913 00:44:45.012883 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-etc-cni-netd\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013264 kubelet[1914]: I0913 00:44:45.012890 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013264 kubelet[1914]: I0913 00:44:45.012900 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-hubble-tls\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013264 kubelet[1914]: I0913 00:44:45.012909 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.012926 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-config-path\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.012942 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-cgroup\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.012962 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-xtables-lock\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.012982 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cni-path\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.012998 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-kernel\") pod \"22bd6160-3044-44ca-9cec-d99d44bc2424\" (UID: \"22bd6160-3044-44ca-9cec-d99d44bc2424\") " Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.013052 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.013640 kubelet[1914]: I0913 00:44:45.013067 1914 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.013942 kubelet[1914]: I0913 00:44:45.013087 1914 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.013942 kubelet[1914]: I0913 00:44:45.013134 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013942 kubelet[1914]: I0913 00:44:45.013179 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013942 kubelet[1914]: I0913 00:44:45.013204 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.013942 kubelet[1914]: I0913 00:44:45.013229 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cni-path" (OuterVolumeSpecName: "cni-path") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.014151 kubelet[1914]: I0913 00:44:45.013251 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.014151 kubelet[1914]: I0913 00:44:45.013287 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-hostproc" (OuterVolumeSpecName: "hostproc") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:45.015818 kubelet[1914]: I0913 00:44:45.015778 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:44:45.016651 kubelet[1914]: I0913 00:44:45.016615 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22bd6160-3044-44ca-9cec-d99d44bc2424-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:44:45.016714 kubelet[1914]: I0913 00:44:45.016677 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:44:45.017005 kubelet[1914]: I0913 00:44:45.016982 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-kube-api-access-fmh9z" (OuterVolumeSpecName: "kube-api-access-fmh9z") pod "22bd6160-3044-44ca-9cec-d99d44bc2424" (UID: "22bd6160-3044-44ca-9cec-d99d44bc2424"). InnerVolumeSpecName "kube-api-access-fmh9z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:44:45.113407 kubelet[1914]: I0913 00:44:45.113341 1914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmh9z\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-kube-api-access-fmh9z\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113407 kubelet[1914]: I0913 00:44:45.113390 1914 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22bd6160-3044-44ca-9cec-d99d44bc2424-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113407 kubelet[1914]: I0913 00:44:45.113400 1914 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113407 kubelet[1914]: I0913 00:44:45.113411 1914 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113407 kubelet[1914]: I0913 00:44:45.113419 1914 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 00:44:45.113426 1914 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22bd6160-3044-44ca-9cec-d99d44bc2424-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 00:44:45.113433 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 
00:44:45.113440 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 00:44:45.113447 1914 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 00:44:45.113454 1914 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.113722 kubelet[1914]: I0913 00:44:45.113487 1914 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22bd6160-3044-44ca-9cec-d99d44bc2424-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:45.349903 kubelet[1914]: I0913 00:44:45.349870 1914 scope.go:117] "RemoveContainer" containerID="299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef" Sep 13 00:44:45.351608 env[1201]: time="2025-09-13T00:44:45.351147158Z" level=info msg="RemoveContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\"" Sep 13 00:44:45.355068 systemd[1]: Removed slice kubepods-besteffort-poda1f82b3a_8528_49a1_809f_d6f516e03c01.slice. 
Sep 13 00:44:45.355932 env[1201]: time="2025-09-13T00:44:45.355846672Z" level=info msg="RemoveContainer for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" returns successfully" Sep 13 00:44:45.356183 kubelet[1914]: I0913 00:44:45.356158 1914 scope.go:117] "RemoveContainer" containerID="299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef" Sep 13 00:44:45.356613 env[1201]: time="2025-09-13T00:44:45.356519222Z" level=error msg="ContainerStatus for \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\": not found" Sep 13 00:44:45.357000 kubelet[1914]: E0913 00:44:45.356886 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\": not found" containerID="299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef" Sep 13 00:44:45.357227 kubelet[1914]: I0913 00:44:45.357171 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef"} err="failed to get container status \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"299d66a491dee1d62990a6d7a4e282f9336b588b85796f5d3daba6c7519a10ef\": not found" Sep 13 00:44:45.357227 kubelet[1914]: I0913 00:44:45.357225 1914 scope.go:117] "RemoveContainer" containerID="c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db" Sep 13 00:44:45.359044 env[1201]: time="2025-09-13T00:44:45.358809251Z" level=info msg="RemoveContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\"" Sep 13 00:44:45.359093 systemd[1]: Removed slice 
kubepods-burstable-pod22bd6160_3044_44ca_9cec_d99d44bc2424.slice. Sep 13 00:44:45.359187 systemd[1]: kubepods-burstable-pod22bd6160_3044_44ca_9cec_d99d44bc2424.slice: Consumed 6.662s CPU time. Sep 13 00:44:45.364336 env[1201]: time="2025-09-13T00:44:45.363751758Z" level=info msg="RemoveContainer for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" returns successfully" Sep 13 00:44:45.364601 kubelet[1914]: I0913 00:44:45.364075 1914 scope.go:117] "RemoveContainer" containerID="d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8" Sep 13 00:44:45.365716 env[1201]: time="2025-09-13T00:44:45.365677924Z" level=info msg="RemoveContainer for \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\"" Sep 13 00:44:45.369699 env[1201]: time="2025-09-13T00:44:45.369652428Z" level=info msg="RemoveContainer for \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\" returns successfully" Sep 13 00:44:45.370354 kubelet[1914]: I0913 00:44:45.370332 1914 scope.go:117] "RemoveContainer" containerID="63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f" Sep 13 00:44:45.371368 env[1201]: time="2025-09-13T00:44:45.371338888Z" level=info msg="RemoveContainer for \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\"" Sep 13 00:44:45.375757 env[1201]: time="2025-09-13T00:44:45.375725788Z" level=info msg="RemoveContainer for \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\" returns successfully" Sep 13 00:44:45.375892 kubelet[1914]: I0913 00:44:45.375868 1914 scope.go:117] "RemoveContainer" containerID="1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b" Sep 13 00:44:45.376954 env[1201]: time="2025-09-13T00:44:45.376903469Z" level=info msg="RemoveContainer for \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\"" Sep 13 00:44:45.380983 env[1201]: time="2025-09-13T00:44:45.380943398Z" level=info msg="RemoveContainer for 
\"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\" returns successfully" Sep 13 00:44:45.381340 kubelet[1914]: I0913 00:44:45.381152 1914 scope.go:117] "RemoveContainer" containerID="b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc" Sep 13 00:44:45.383129 env[1201]: time="2025-09-13T00:44:45.383099933Z" level=info msg="RemoveContainer for \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\"" Sep 13 00:44:45.386266 env[1201]: time="2025-09-13T00:44:45.386227726Z" level=info msg="RemoveContainer for \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\" returns successfully" Sep 13 00:44:45.386378 kubelet[1914]: I0913 00:44:45.386360 1914 scope.go:117] "RemoveContainer" containerID="c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db" Sep 13 00:44:45.386636 env[1201]: time="2025-09-13T00:44:45.386546813Z" level=error msg="ContainerStatus for \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\": not found" Sep 13 00:44:45.386745 kubelet[1914]: E0913 00:44:45.386724 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\": not found" containerID="c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db" Sep 13 00:44:45.386798 kubelet[1914]: I0913 00:44:45.386750 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db"} err="failed to get container status \"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c0ffa90f113f75755bb0674acab1b575d864e1aa7640556b30476fa7f922c6db\": not found" Sep 13 00:44:45.386798 kubelet[1914]: I0913 00:44:45.386767 1914 scope.go:117] "RemoveContainer" containerID="d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8" Sep 13 00:44:45.386985 env[1201]: time="2025-09-13T00:44:45.386927357Z" level=error msg="ContainerStatus for \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\": not found" Sep 13 00:44:45.387926 kubelet[1914]: E0913 00:44:45.387894 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\": not found" containerID="d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8" Sep 13 00:44:45.387926 kubelet[1914]: I0913 00:44:45.387917 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8"} err="failed to get container status \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8291979712dac88401c086768c11190cf66e6539bd51417307d34d19fab80c8\": not found" Sep 13 00:44:45.387926 kubelet[1914]: I0913 00:44:45.387929 1914 scope.go:117] "RemoveContainer" containerID="63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f" Sep 13 00:44:45.388123 env[1201]: time="2025-09-13T00:44:45.388078138Z" level=error msg="ContainerStatus for \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\": not 
found" Sep 13 00:44:45.388211 kubelet[1914]: E0913 00:44:45.388188 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\": not found" containerID="63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f" Sep 13 00:44:45.388263 kubelet[1914]: I0913 00:44:45.388213 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f"} err="failed to get container status \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"63618039ff958548f26879f4dacd4bfd67001dadd10015d46b830489f90d8f3f\": not found" Sep 13 00:44:45.388263 kubelet[1914]: I0913 00:44:45.388225 1914 scope.go:117] "RemoveContainer" containerID="1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b" Sep 13 00:44:45.388414 env[1201]: time="2025-09-13T00:44:45.388378038Z" level=error msg="ContainerStatus for \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\": not found" Sep 13 00:44:45.388621 kubelet[1914]: E0913 00:44:45.388586 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\": not found" containerID="1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b" Sep 13 00:44:45.388688 kubelet[1914]: I0913 00:44:45.388626 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b"} 
err="failed to get container status \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f92f62fb635d3a1a11be15ae5f557b8131f1d58cee2e6ea1de6c5f1e59e881b\": not found" Sep 13 00:44:45.388688 kubelet[1914]: I0913 00:44:45.388643 1914 scope.go:117] "RemoveContainer" containerID="b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc" Sep 13 00:44:45.388824 env[1201]: time="2025-09-13T00:44:45.388778801Z" level=error msg="ContainerStatus for \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\": not found" Sep 13 00:44:45.388908 kubelet[1914]: E0913 00:44:45.388894 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\": not found" containerID="b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc" Sep 13 00:44:45.388951 kubelet[1914]: I0913 00:44:45.388912 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc"} err="failed to get container status \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b101ea75a0b9567ee28cb0bfcd77128fa4e59b5ac01ed102362f24f4d1f184bc\": not found" Sep 13 00:44:45.671352 systemd[1]: var-lib-kubelet-pods-a1f82b3a\x2d8528\x2d49a1\x2d809f\x2dd6f516e03c01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4szp.mount: Deactivated successfully. 
Sep 13 00:44:45.671512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42-rootfs.mount: Deactivated successfully. Sep 13 00:44:45.671651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-212b8d1d689d3af1be51b5d8da9c18609296680c1ca4f85b8fdd10d28b12bf42-shm.mount: Deactivated successfully. Sep 13 00:44:45.671757 systemd[1]: var-lib-kubelet-pods-22bd6160\x2d3044\x2d44ca\x2d9cec\x2dd99d44bc2424-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmh9z.mount: Deactivated successfully. Sep 13 00:44:45.671865 systemd[1]: var-lib-kubelet-pods-22bd6160\x2d3044\x2d44ca\x2d9cec\x2dd99d44bc2424-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:45.671953 systemd[1]: var-lib-kubelet-pods-22bd6160\x2d3044\x2d44ca\x2d9cec\x2dd99d44bc2424-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:44:45.923782 kubelet[1914]: I0913 00:44:45.923665 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22bd6160-3044-44ca-9cec-d99d44bc2424" path="/var/lib/kubelet/pods/22bd6160-3044-44ca-9cec-d99d44bc2424/volumes" Sep 13 00:44:45.924306 kubelet[1914]: I0913 00:44:45.924243 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f82b3a-8528-49a1-809f-d6f516e03c01" path="/var/lib/kubelet/pods/a1f82b3a-8528-49a1-809f-d6f516e03c01/volumes" Sep 13 00:44:46.630483 sshd[3581]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:46.634994 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:35444.service: Deactivated successfully. Sep 13 00:44:46.635783 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:44:46.636652 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:44:46.638517 systemd[1]: Started sshd@24-10.0.0.24:22-10.0.0.1:35454.service. Sep 13 00:44:46.639763 systemd-logind[1189]: Removed session 24. 
Sep 13 00:44:46.674687 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:46.675907 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:46.679915 systemd-logind[1189]: New session 25 of user core. Sep 13 00:44:46.680715 systemd[1]: Started session-25.scope. Sep 13 00:44:47.112324 sshd[3744]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:47.115709 systemd[1]: Started sshd@25-10.0.0.24:22-10.0.0.1:35458.service. Sep 13 00:44:47.119329 systemd[1]: sshd@24-10.0.0.24:22-10.0.0.1:35454.service: Deactivated successfully. Sep 13 00:44:47.120230 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:44:47.121780 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:44:47.122959 systemd-logind[1189]: Removed session 25. Sep 13 00:44:47.142352 systemd[1]: Created slice kubepods-burstable-pod1c2f9fe8_751c_429f_bd72_e2f6145ede50.slice. Sep 13 00:44:47.154577 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 35458 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:47.156259 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:47.163204 systemd[1]: Started session-26.scope. Sep 13 00:44:47.164376 systemd-logind[1189]: New session 26 of user core. Sep 13 00:44:47.286018 sshd[3755]: pam_unix(sshd:session): session closed for user core Sep 13 00:44:47.290866 systemd[1]: sshd@25-10.0.0.24:22-10.0.0.1:35458.service: Deactivated successfully. Sep 13 00:44:47.291787 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:44:47.293150 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:44:47.294505 systemd[1]: Started sshd@26-10.0.0.24:22-10.0.0.1:35466.service. Sep 13 00:44:47.295823 systemd-logind[1189]: Removed session 26. 
Sep 13 00:44:47.300873 kubelet[1914]: E0913 00:44:47.300775 1914 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-7bp2x lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-vrjp7" podUID="1c2f9fe8-751c-429f-bd72-e2f6145ede50" Sep 13 00:44:47.325228 kubelet[1914]: I0913 00:44:47.325189 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-config-path\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325228 kubelet[1914]: I0913 00:44:47.325230 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-run\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325446 kubelet[1914]: I0913 00:44:47.325280 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-bpf-maps\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325446 kubelet[1914]: I0913 00:44:47.325314 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hostproc\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 
00:44:47.325446 kubelet[1914]: I0913 00:44:47.325334 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-etc-cni-netd\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325446 kubelet[1914]: I0913 00:44:47.325351 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-clustermesh-secrets\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325446 kubelet[1914]: I0913 00:44:47.325369 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hubble-tls\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325446 kubelet[1914]: I0913 00:44:47.325391 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp2x\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-kube-api-access-7bp2x\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325413 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-net\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325427 1914 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-cgroup\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325438 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-xtables-lock\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325454 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-ipsec-secrets\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325500 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cni-path\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325609 kubelet[1914]: I0913 00:44:47.325537 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-lib-modules\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.325738 kubelet[1914]: I0913 00:44:47.325553 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-kernel\") pod \"cilium-vrjp7\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " pod="kube-system/cilium-vrjp7" Sep 13 00:44:47.327251 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 35466 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:44:47.328337 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:44:47.331553 systemd-logind[1189]: New session 27 of user core. Sep 13 00:44:47.332337 systemd[1]: Started session-27.scope. Sep 13 00:44:47.527332 kubelet[1914]: I0913 00:44:47.527149 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-run\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527332 kubelet[1914]: I0913 00:44:47.527185 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-bpf-maps\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527332 kubelet[1914]: I0913 00:44:47.527201 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-etc-cni-netd\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527332 kubelet[1914]: I0913 00:44:47.527223 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-xtables-lock\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527332 kubelet[1914]: 
I0913 00:44:47.527244 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hostproc\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527332 kubelet[1914]: I0913 00:44:47.527266 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bp2x\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-kube-api-access-7bp2x\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527268 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527266 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527299 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cni-path\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527343 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-config-path\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527368 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hubble-tls\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527708 kubelet[1914]: I0913 00:44:47.527394 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-clustermesh-secrets\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527420 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-ipsec-secrets\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527438 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-lib-modules\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527483 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-net\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527505 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-kernel\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527526 1914 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-cgroup\") pod \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\" (UID: \"1c2f9fe8-751c-429f-bd72-e2f6145ede50\") " Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527568 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.527857 kubelet[1914]: I0913 00:44:47.527581 1914 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.528023 kubelet[1914]: I0913 00:44:47.527310 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-xtables-lock" (OuterVolumeSpecName: 
"xtables-lock") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528023 kubelet[1914]: I0913 00:44:47.527320 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528023 kubelet[1914]: I0913 00:44:47.527325 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528023 kubelet[1914]: I0913 00:44:47.527337 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528023 kubelet[1914]: I0913 00:44:47.527606 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528142 kubelet[1914]: I0913 00:44:47.527951 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528311 kubelet[1914]: I0913 00:44:47.528246 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.528527 kubelet[1914]: I0913 00:44:47.528436 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:44:47.529605 kubelet[1914]: I0913 00:44:47.529562 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:44:47.531992 systemd[1]: var-lib-kubelet-pods-1c2f9fe8\x2d751c\x2d429f\x2dbd72\x2de2f6145ede50-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:44:47.532137 systemd[1]: var-lib-kubelet-pods-1c2f9fe8\x2d751c\x2d429f\x2dbd72\x2de2f6145ede50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7bp2x.mount: Deactivated successfully. Sep 13 00:44:47.534196 systemd[1]: var-lib-kubelet-pods-1c2f9fe8\x2d751c\x2d429f\x2dbd72\x2de2f6145ede50-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:47.534272 systemd[1]: var-lib-kubelet-pods-1c2f9fe8\x2d751c\x2d429f\x2dbd72\x2de2f6145ede50-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:44:47.535835 kubelet[1914]: I0913 00:44:47.535808 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-kube-api-access-7bp2x" (OuterVolumeSpecName: "kube-api-access-7bp2x") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "kube-api-access-7bp2x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:44:47.535992 kubelet[1914]: I0913 00:44:47.535975 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:44:47.536174 kubelet[1914]: I0913 00:44:47.536134 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:44:47.536605 kubelet[1914]: I0913 00:44:47.536537 1914 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1c2f9fe8-751c-429f-bd72-e2f6145ede50" (UID: "1c2f9fe8-751c-429f-bd72-e2f6145ede50"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:44:47.628703 kubelet[1914]: I0913 00:44:47.628659 1914 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628703 kubelet[1914]: I0913 00:44:47.628687 1914 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bp2x\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-kube-api-access-7bp2x\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628703 kubelet[1914]: I0913 00:44:47.628698 1914 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628703 kubelet[1914]: I0913 00:44:47.628705 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Sep 13 00:44:47.628703 kubelet[1914]: I0913 00:44:47.628711 1914 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c2f9fe8-751c-429f-bd72-e2f6145ede50-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628718 1914 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628725 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628732 1914 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628738 1914 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628745 1914 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628751 1914 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628770 1914 
reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.628940 kubelet[1914]: I0913 00:44:47.628779 1914 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c2f9fe8-751c-429f-bd72-e2f6145ede50-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:44:47.925997 systemd[1]: Removed slice kubepods-burstable-pod1c2f9fe8_751c_429f_bd72_e2f6145ede50.slice. Sep 13 00:44:48.320604 kubelet[1914]: E0913 00:44:48.320549 1914 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:44:48.402164 systemd[1]: Created slice kubepods-burstable-pod717f3121_dcba_439b_96cd_6cdfde0c7535.slice. Sep 13 00:44:48.432661 kubelet[1914]: I0913 00:44:48.432620 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-cilium-cgroup\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432661 kubelet[1914]: I0913 00:44:48.432657 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-cilium-run\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432661 kubelet[1914]: I0913 00:44:48.432679 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-etc-cni-netd\") pod \"cilium-bmvtv\" (UID: 
\"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432695 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-bpf-maps\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432709 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-lib-modules\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432732 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/717f3121-dcba-439b-96cd-6cdfde0c7535-cilium-ipsec-secrets\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432746 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-cni-path\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432760 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/717f3121-dcba-439b-96cd-6cdfde0c7535-hubble-tls\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.432928 kubelet[1914]: I0913 00:44:48.432772 1914 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qtcn\" (UniqueName: \"kubernetes.io/projected/717f3121-dcba-439b-96cd-6cdfde0c7535-kube-api-access-2qtcn\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433075 kubelet[1914]: I0913 00:44:48.432841 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-xtables-lock\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433075 kubelet[1914]: I0913 00:44:48.432890 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/717f3121-dcba-439b-96cd-6cdfde0c7535-clustermesh-secrets\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433075 kubelet[1914]: I0913 00:44:48.432911 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/717f3121-dcba-439b-96cd-6cdfde0c7535-cilium-config-path\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433075 kubelet[1914]: I0913 00:44:48.432927 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-host-proc-sys-net\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433075 kubelet[1914]: I0913 00:44:48.432945 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-host-proc-sys-kernel\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.433191 kubelet[1914]: I0913 00:44:48.432959 1914 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/717f3121-dcba-439b-96cd-6cdfde0c7535-hostproc\") pod \"cilium-bmvtv\" (UID: \"717f3121-dcba-439b-96cd-6cdfde0c7535\") " pod="kube-system/cilium-bmvtv" Sep 13 00:44:48.705834 kubelet[1914]: E0913 00:44:48.704989 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:48.706020 env[1201]: time="2025-09-13T00:44:48.705803104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmvtv,Uid:717f3121-dcba-439b-96cd-6cdfde0c7535,Namespace:kube-system,Attempt:0,}" Sep 13 00:44:48.751808 env[1201]: time="2025-09-13T00:44:48.751694246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:48.751808 env[1201]: time="2025-09-13T00:44:48.751807081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:48.752169 env[1201]: time="2025-09-13T00:44:48.751830955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:48.752169 env[1201]: time="2025-09-13T00:44:48.751980830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218 pid=3799 runtime=io.containerd.runc.v2 Sep 13 00:44:48.767049 systemd[1]: Started cri-containerd-bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218.scope. Sep 13 00:44:48.803753 env[1201]: time="2025-09-13T00:44:48.803688260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmvtv,Uid:717f3121-dcba-439b-96cd-6cdfde0c7535,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\"" Sep 13 00:44:48.804431 kubelet[1914]: E0913 00:44:48.804389 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:48.809724 env[1201]: time="2025-09-13T00:44:48.809678419Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:44:48.822031 env[1201]: time="2025-09-13T00:44:48.821960621Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e\"" Sep 13 00:44:48.822780 env[1201]: time="2025-09-13T00:44:48.822734081Z" level=info msg="StartContainer for \"3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e\"" Sep 13 00:44:48.838350 systemd[1]: Started cri-containerd-3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e.scope. 
Sep 13 00:44:48.868412 env[1201]: time="2025-09-13T00:44:48.868355230Z" level=info msg="StartContainer for \"3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e\" returns successfully" Sep 13 00:44:48.877102 systemd[1]: cri-containerd-3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e.scope: Deactivated successfully. Sep 13 00:44:48.913892 env[1201]: time="2025-09-13T00:44:48.913832715Z" level=info msg="shim disconnected" id=3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e Sep 13 00:44:48.913892 env[1201]: time="2025-09-13T00:44:48.913885165Z" level=warning msg="cleaning up after shim disconnected" id=3e4ddefd0a700e743233facc4f64d1693997190dedf5d3ac67a70f8dce43c56e namespace=k8s.io Sep 13 00:44:48.913892 env[1201]: time="2025-09-13T00:44:48.913894392Z" level=info msg="cleaning up dead shim" Sep 13 00:44:48.921419 kubelet[1914]: E0913 00:44:48.921366 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:48.922918 env[1201]: time="2025-09-13T00:44:48.922849288Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3882 runtime=io.containerd.runc.v2\n" Sep 13 00:44:49.364895 kubelet[1914]: E0913 00:44:49.364826 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:49.369997 env[1201]: time="2025-09-13T00:44:49.369951871Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:44:49.383856 env[1201]: time="2025-09-13T00:44:49.383778199Z" level=info msg="CreateContainer within sandbox 
\"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54\"" Sep 13 00:44:49.384357 env[1201]: time="2025-09-13T00:44:49.384322634Z" level=info msg="StartContainer for \"2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54\"" Sep 13 00:44:49.400011 systemd[1]: Started cri-containerd-2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54.scope. Sep 13 00:44:49.428707 env[1201]: time="2025-09-13T00:44:49.428642691Z" level=info msg="StartContainer for \"2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54\" returns successfully" Sep 13 00:44:49.435759 systemd[1]: cri-containerd-2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54.scope: Deactivated successfully. Sep 13 00:44:49.452191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54-rootfs.mount: Deactivated successfully. 
Sep 13 00:44:49.458764 env[1201]: time="2025-09-13T00:44:49.458686554Z" level=info msg="shim disconnected" id=2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54 Sep 13 00:44:49.458764 env[1201]: time="2025-09-13T00:44:49.458745487Z" level=warning msg="cleaning up after shim disconnected" id=2f345cab65e699438905b18ad21cc8ed607af43d9ead82af521c21d1ca305b54 namespace=k8s.io Sep 13 00:44:49.458764 env[1201]: time="2025-09-13T00:44:49.458755595Z" level=info msg="cleaning up dead shim" Sep 13 00:44:49.467892 env[1201]: time="2025-09-13T00:44:49.467803573Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3946 runtime=io.containerd.runc.v2\n" Sep 13 00:44:49.921749 kubelet[1914]: E0913 00:44:49.921617 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:49.924801 kubelet[1914]: I0913 00:44:49.924761 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2f9fe8-751c-429f-bd72-e2f6145ede50" path="/var/lib/kubelet/pods/1c2f9fe8-751c-429f-bd72-e2f6145ede50/volumes" Sep 13 00:44:50.368416 kubelet[1914]: E0913 00:44:50.368362 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:44:50.396433 env[1201]: time="2025-09-13T00:44:50.396340924Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:44:50.411956 env[1201]: time="2025-09-13T00:44:50.411896179Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e\""
Sep 13 00:44:50.412595 env[1201]: time="2025-09-13T00:44:50.412546325Z" level=info msg="StartContainer for \"065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e\""
Sep 13 00:44:50.430818 systemd[1]: Started cri-containerd-065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e.scope.
Sep 13 00:44:50.462633 systemd[1]: cri-containerd-065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e.scope: Deactivated successfully.
Sep 13 00:44:50.463548 env[1201]: time="2025-09-13T00:44:50.463437941Z" level=info msg="StartContainer for \"065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e\" returns successfully"
Sep 13 00:44:50.481669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e-rootfs.mount: Deactivated successfully.
Sep 13 00:44:50.486354 env[1201]: time="2025-09-13T00:44:50.486295542Z" level=info msg="shim disconnected" id=065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e
Sep 13 00:44:50.486354 env[1201]: time="2025-09-13T00:44:50.486353672Z" level=warning msg="cleaning up after shim disconnected" id=065dd3c25fd1904d91dd8d96452a2275372384a23916a8b95cf7d8ed00256f2e namespace=k8s.io
Sep 13 00:44:50.486536 env[1201]: time="2025-09-13T00:44:50.486364242Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:50.493532 env[1201]: time="2025-09-13T00:44:50.493399409Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:50.639373 kubelet[1914]: I0913 00:44:50.639196 1914 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:44:50Z","lastTransitionTime":"2025-09-13T00:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:44:51.372815 kubelet[1914]: E0913 00:44:51.372770 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:51.377125 env[1201]: time="2025-09-13T00:44:51.377071132Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:44:51.389239 env[1201]: time="2025-09-13T00:44:51.389174531Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5\""
Sep 13 00:44:51.389763 env[1201]: time="2025-09-13T00:44:51.389739805Z" level=info msg="StartContainer for \"100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5\""
Sep 13 00:44:51.405691 systemd[1]: Started cri-containerd-100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5.scope.
Sep 13 00:44:51.425977 systemd[1]: cri-containerd-100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5.scope: Deactivated successfully.
Sep 13 00:44:51.427449 env[1201]: time="2025-09-13T00:44:51.427381073Z" level=info msg="StartContainer for \"100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5\" returns successfully"
Sep 13 00:44:51.443069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5-rootfs.mount: Deactivated successfully.
Sep 13 00:44:51.447854 env[1201]: time="2025-09-13T00:44:51.447805028Z" level=info msg="shim disconnected" id=100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5
Sep 13 00:44:51.447854 env[1201]: time="2025-09-13T00:44:51.447859180Z" level=warning msg="cleaning up after shim disconnected" id=100d03fa05cf6251d75d43b9cf3a21924f657c14acef92f3ec3c9870be6107f5 namespace=k8s.io
Sep 13 00:44:51.448020 env[1201]: time="2025-09-13T00:44:51.447875922Z" level=info msg="cleaning up dead shim"
Sep 13 00:44:51.456656 env[1201]: time="2025-09-13T00:44:51.456620202Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:44:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4056 runtime=io.containerd.runc.v2\n"
Sep 13 00:44:52.376573 kubelet[1914]: E0913 00:44:52.376537 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:52.387029 env[1201]: time="2025-09-13T00:44:52.381410574Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:44:52.396992 env[1201]: time="2025-09-13T00:44:52.396946063Z" level=info msg="CreateContainer within sandbox \"bdd0a8dc4d8cf2a71d9b41e3ce256e7e805781fd6f77babd8b4ddfa1664f3218\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f\""
Sep 13 00:44:52.397492 env[1201]: time="2025-09-13T00:44:52.397451332Z" level=info msg="StartContainer for \"ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f\""
Sep 13 00:44:52.416486 systemd[1]: Started cri-containerd-ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f.scope.
Sep 13 00:44:52.444377 env[1201]: time="2025-09-13T00:44:52.444313709Z" level=info msg="StartContainer for \"ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f\" returns successfully"
Sep 13 00:44:52.725496 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:44:53.383787 kubelet[1914]: E0913 00:44:53.383737 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:53.545898 systemd[1]: run-containerd-runc-k8s.io-ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f-runc.BOg07F.mount: Deactivated successfully.
Sep 13 00:44:53.921524 kubelet[1914]: E0913 00:44:53.921485 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:53.921720 kubelet[1914]: E0913 00:44:53.921448 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:54.706688 kubelet[1914]: E0913 00:44:54.706644 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:55.444828 systemd-networkd[1023]: lxc_health: Link UP
Sep 13 00:44:55.453929 systemd-networkd[1023]: lxc_health: Gained carrier
Sep 13 00:44:55.454641 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:44:56.707722 kubelet[1914]: E0913 00:44:56.707679 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:56.724215 kubelet[1914]: I0913 00:44:56.724137 1914 pod_startup_latency_tracker.go:104] "Observed
pod startup duration" pod="kube-system/cilium-bmvtv" podStartSLOduration=8.724111499 podStartE2EDuration="8.724111499s" podCreationTimestamp="2025-09-13 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:44:53.397948556 +0000 UTC m=+95.559258359" watchObservedRunningTime="2025-09-13 00:44:56.724111499 +0000 UTC m=+98.885421312"
Sep 13 00:44:56.979692 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Sep 13 00:44:57.391305 kubelet[1914]: E0913 00:44:57.391240 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:44:58.398547 kubelet[1914]: E0913 00:44:58.398444 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:45:02.189105 systemd[1]: run-containerd-runc-k8s.io-ad31996ffda7c34ad656016939ee3f897dd9b347b033e23977034c54f4bb022f-runc.qOtPe3.mount: Deactivated successfully.
Sep 13 00:45:02.234734 sshd[3769]: pam_unix(sshd:session): session closed for user core
Sep 13 00:45:02.238221 systemd[1]: sshd@26-10.0.0.24:22-10.0.0.1:35466.service: Deactivated successfully.
Sep 13 00:45:02.238985 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:45:02.239666 systemd-logind[1189]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:45:02.240386 systemd-logind[1189]: Removed session 27.