Sep 13 00:53:27.079830 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:53:27.079865 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:27.079873 kernel: BIOS-provided physical RAM map:
Sep 13 00:53:27.079879 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:53:27.079884 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:53:27.079890 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:53:27.079897 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:53:27.079902 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:53:27.079909 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:53:27.079915 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:53:27.079921 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:53:27.079926 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:53:27.079932 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:53:27.079937 kernel: NX (Execute Disable) protection: active
Sep 13 00:53:27.079946 kernel: SMBIOS 2.8 present.
Sep 13 00:53:27.079952 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:53:27.079958 kernel: Hypervisor detected: KVM
Sep 13 00:53:27.079964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:53:27.079972 kernel: kvm-clock: cpu 0, msr 4119f001, primary cpu clock
Sep 13 00:53:27.079978 kernel: kvm-clock: using sched offset of 3719275922 cycles
Sep 13 00:53:27.079985 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:53:27.079991 kernel: tsc: Detected 2794.750 MHz processor
Sep 13 00:53:27.079997 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:53:27.080005 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:53:27.080011 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:53:27.080017 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:53:27.080024 kernel: Using GB pages for direct mapping
Sep 13 00:53:27.080030 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:53:27.080036 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:53:27.080060 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080078 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080088 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080097 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:53:27.080103 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080109 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080115 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080122 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:53:27.080128 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:53:27.080134 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:53:27.080140 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:53:27.080150 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:53:27.080157 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:53:27.080163 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:53:27.080170 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:53:27.080176 kernel: No NUMA configuration found
Sep 13 00:53:27.080183 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:53:27.080191 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 13 00:53:27.080212 kernel: Zone ranges:
Sep 13 00:53:27.080219 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:53:27.080225 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:53:27.080232 kernel: Normal empty
Sep 13 00:53:27.080238 kernel: Movable zone start for each node
Sep 13 00:53:27.080245 kernel: Early memory node ranges
Sep 13 00:53:27.080251 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:53:27.080258 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:53:27.080266 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:53:27.080275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:53:27.080282 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:53:27.080288 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:53:27.080295 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:53:27.080301 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:53:27.080308 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:53:27.080314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:53:27.080321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:53:27.080327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:53:27.080337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:53:27.080349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:53:27.080355 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:53:27.080362 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:53:27.080368 kernel: TSC deadline timer available
Sep 13 00:53:27.080375 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 13 00:53:27.080381 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:53:27.080388 kernel: kvm-guest: setup PV sched yield
Sep 13 00:53:27.080394 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:53:27.080403 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:53:27.080410 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:53:27.080416 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:53:27.080423 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 13 00:53:27.080430 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 13 00:53:27.080436 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:53:27.080442 kernel: kvm-guest: setup async PF for cpu 0
Sep 13 00:53:27.080449 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 13 00:53:27.080455 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:53:27.080463 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:53:27.080469 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 13 00:53:27.080476 kernel: Policy zone: DMA32
Sep 13 00:53:27.080483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:27.080490 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:53:27.080504 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:53:27.080511 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:53:27.080517 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:53:27.080526 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 13 00:53:27.080533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:53:27.080540 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:53:27.080547 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:53:27.080554 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:53:27.080561 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:53:27.080567 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:53:27.080574 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:53:27.080580 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:53:27.080588 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:53:27.080595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:53:27.080602 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:53:27.080608 kernel: random: crng init done
Sep 13 00:53:27.080614 kernel: Console: colour VGA+ 80x25
Sep 13 00:53:27.080621 kernel: printk: console [ttyS0] enabled
Sep 13 00:53:27.080628 kernel: ACPI: Core revision 20210730
Sep 13 00:53:27.080634 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:53:27.080641 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:53:27.080649 kernel: x2apic enabled
Sep 13 00:53:27.080655 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:53:27.080664 kernel: kvm-guest: setup PV IPIs
Sep 13 00:53:27.080671 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:53:27.080677 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 13 00:53:27.080686 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 13 00:53:27.080693 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:53:27.080699 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:53:27.080706 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:53:27.080719 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:53:27.080726 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:53:27.080733 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:53:27.080741 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:53:27.080748 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:53:27.080755 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:53:27.080762 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:53:27.080769 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:53:27.080776 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:53:27.080784 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:53:27.080792 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:53:27.080798 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:53:27.080805 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:53:27.080812 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:53:27.080819 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:53:27.080826 kernel: LSM: Security Framework initializing
Sep 13 00:53:27.080834 kernel: SELinux: Initializing.
Sep 13 00:53:27.080841 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:53:27.080848 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:53:27.080855 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:53:27.080862 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:53:27.080869 kernel: ... version: 0
Sep 13 00:53:27.080876 kernel: ... bit width: 48
Sep 13 00:53:27.080883 kernel: ... generic registers: 6
Sep 13 00:53:27.080890 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:53:27.080898 kernel: ... max period: 00007fffffffffff
Sep 13 00:53:27.080905 kernel: ... fixed-purpose events: 0
Sep 13 00:53:27.080911 kernel: ... event mask: 000000000000003f
Sep 13 00:53:27.080918 kernel: signal: max sigframe size: 1776
Sep 13 00:53:27.080925 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:53:27.080932 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:53:27.080938 kernel: x86: Booting SMP configuration:
Sep 13 00:53:27.080945 kernel: .... node #0, CPUs: #1
Sep 13 00:53:27.080952 kernel: kvm-clock: cpu 1, msr 4119f041, secondary cpu clock
Sep 13 00:53:27.080959 kernel: kvm-guest: setup async PF for cpu 1
Sep 13 00:53:27.080967 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 13 00:53:27.080974 kernel: #2
Sep 13 00:53:27.080981 kernel: kvm-clock: cpu 2, msr 4119f081, secondary cpu clock
Sep 13 00:53:27.080987 kernel: kvm-guest: setup async PF for cpu 2
Sep 13 00:53:27.080994 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 13 00:53:27.081003 kernel: #3
Sep 13 00:53:27.081010 kernel: kvm-clock: cpu 3, msr 4119f0c1, secondary cpu clock
Sep 13 00:53:27.081017 kernel: kvm-guest: setup async PF for cpu 3
Sep 13 00:53:27.081024 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 13 00:53:27.081032 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:53:27.081039 kernel: smpboot: Max logical packages: 1
Sep 13 00:53:27.081046 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 13 00:53:27.081053 kernel: devtmpfs: initialized
Sep 13 00:53:27.081060 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:53:27.081067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:53:27.081074 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:53:27.081080 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:53:27.081087 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:53:27.081095 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:53:27.081102 kernel: audit: type=2000 audit(1757724806.312:1): state=initialized audit_enabled=0 res=1
Sep 13 00:53:27.081109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:53:27.081116 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:53:27.081123 kernel: cpuidle: using governor menu
Sep 13 00:53:27.081130 kernel: ACPI: bus type PCI registered
Sep 13 00:53:27.081137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:53:27.081143 kernel: dca service started, version 1.12.1
Sep 13 00:53:27.081150 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 00:53:27.081159 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 13 00:53:27.081165 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:53:27.081172 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:53:27.081179 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:53:27.081186 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:53:27.081193 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:53:27.081212 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:53:27.081219 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:53:27.081235 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:53:27.081245 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:53:27.081251 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:53:27.081258 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:53:27.081265 kernel: ACPI: Interpreter enabled
Sep 13 00:53:27.081272 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:53:27.081279 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:53:27.081286 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:53:27.081292 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:53:27.081299 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:53:27.081465 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:53:27.081553 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:53:27.081628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:53:27.081637 kernel: PCI host bridge to bus 0000:00
Sep 13 00:53:27.081754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:53:27.081843 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:53:27.081914 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:27.081980 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:53:27.082044 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:53:27.082109 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:53:27.082174 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:53:27.082339 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 00:53:27.082468 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 13 00:53:27.082572 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:53:27.082649 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:53:27.082794 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:53:27.082894 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:53:27.082986 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:53:27.083063 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 00:53:27.083142 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:53:27.083240 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:53:27.083324 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:53:27.083399 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:53:27.083474 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:53:27.083557 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:53:27.083645 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:53:27.083722 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 13 00:53:27.083796 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:53:27.083869 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:53:27.083940 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:53:27.084030 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 00:53:27.084106 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:53:27.084192 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 00:53:27.084289 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 13 00:53:27.084361 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:53:27.084445 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 00:53:27.084529 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 00:53:27.084539 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:53:27.084546 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:53:27.084554 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:53:27.084561 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:53:27.084571 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:53:27.084578 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:53:27.084585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:53:27.084591 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:53:27.084598 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:53:27.084605 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:53:27.084612 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:53:27.084619 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:53:27.084626 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:53:27.084634 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:53:27.084641 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:53:27.084648 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:53:27.084655 kernel: iommu: Default domain type: Translated
Sep 13 00:53:27.084662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:53:27.084735 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:53:27.084810 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:53:27.084881 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:53:27.084893 kernel: vgaarb: loaded
Sep 13 00:53:27.084900 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:53:27.084907 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:53:27.084914 kernel: PTP clock support registered
Sep 13 00:53:27.084921 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:53:27.084928 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:53:27.084935 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:53:27.084942 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:53:27.084948 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:53:27.084957 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:53:27.084964 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:53:27.084971 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:53:27.084978 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:53:27.084985 kernel: pnp: PnP ACPI init
Sep 13 00:53:27.085089 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:53:27.085101 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:53:27.085108 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:53:27.085117 kernel: NET: Registered PF_INET protocol family
Sep 13 00:53:27.085124 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:53:27.085131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:53:27.085138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:53:27.085146 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:53:27.085153 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 00:53:27.085160 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:53:27.085166 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:53:27.085173 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:53:27.085182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:53:27.085189 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:53:27.085272 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:53:27.085338 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:53:27.085403 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:27.085467 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:53:27.085540 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:53:27.085606 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:53:27.085618 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:53:27.085625 kernel: Initialise system trusted keyrings
Sep 13 00:53:27.085631 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:53:27.085638 kernel: Key type asymmetric registered
Sep 13 00:53:27.085645 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:53:27.085652 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:53:27.085659 kernel: io scheduler mq-deadline registered
Sep 13 00:53:27.085666 kernel: io scheduler kyber registered
Sep 13 00:53:27.085673 kernel: io scheduler bfq registered
Sep 13 00:53:27.085680 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:53:27.085689 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:53:27.085696 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:53:27.085703 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:53:27.085710 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:53:27.085716 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:53:27.085723 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:53:27.085730 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:53:27.085737 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:53:27.085744 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:53:27.085830 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:53:27.085900 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:53:27.085967 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:53:26 UTC (1757724806)
Sep 13 00:53:27.086034 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:53:27.086043 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:53:27.086050 kernel: Segment Routing with IPv6
Sep 13 00:53:27.086057 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:53:27.086064 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:53:27.086073 kernel: Key type dns_resolver registered
Sep 13 00:53:27.086080 kernel: IPI shorthand broadcast: enabled
Sep 13 00:53:27.086087 kernel: sched_clock: Marking stable (493073077, 101427687)->(628174637, -33673873)
Sep 13 00:53:27.086094 kernel: registered taskstats version 1
Sep 13 00:53:27.086101 kernel: Loading compiled-in X.509 certificates
Sep 13 00:53:27.086108 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:53:27.086115 kernel: Key type .fscrypt registered
Sep 13 00:53:27.086122 kernel: Key type fscrypt-provisioning registered
Sep 13 00:53:27.086129 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:53:27.086137 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:53:27.086144 kernel: ima: No architecture policies found
Sep 13 00:53:27.086150 kernel: clk: Disabling unused clocks
Sep 13 00:53:27.086157 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:53:27.086164 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:53:27.086171 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:53:27.086178 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:53:27.086185 kernel: Run /init as init process
Sep 13 00:53:27.086193 kernel: with arguments:
Sep 13 00:53:27.086222 kernel: /init
Sep 13 00:53:27.086230 kernel: with environment:
Sep 13 00:53:27.086237 kernel: HOME=/
Sep 13 00:53:27.086244 kernel: TERM=linux
Sep 13 00:53:27.086250 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:53:27.086263 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:53:27.086273 systemd[1]: Detected virtualization kvm.
Sep 13 00:53:27.086282 systemd[1]: Detected architecture x86-64.
Sep 13 00:53:27.086290 systemd[1]: Running in initrd.
Sep 13 00:53:27.086297 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:53:27.086304 systemd[1]: Hostname set to .
Sep 13 00:53:27.086312 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:53:27.086319 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:53:27.086326 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:53:27.086334 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:53:27.086341 systemd[1]: Reached target paths.target.
Sep 13 00:53:27.086350 systemd[1]: Reached target slices.target.
Sep 13 00:53:27.086364 systemd[1]: Reached target swap.target.
Sep 13 00:53:27.086373 systemd[1]: Reached target timers.target.
Sep 13 00:53:27.086381 systemd[1]: Listening on iscsid.socket.
Sep 13 00:53:27.086388 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:53:27.086397 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:53:27.086405 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:53:27.086413 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:53:27.086420 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:53:27.086428 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:53:27.086435 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:53:27.086443 systemd[1]: Reached target sockets.target.
Sep 13 00:53:27.086450 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:53:27.086459 systemd[1]: Finished network-cleanup.service.
Sep 13 00:53:27.086468 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:53:27.086476 systemd[1]: Starting systemd-journald.service...
Sep 13 00:53:27.086483 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:53:27.086491 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:53:27.086505 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:53:27.086513 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:53:27.086521 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:53:27.086528 kernel: audit: type=1130 audit(1757724807.082:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.086543 systemd-journald[199]: Journal started
Sep 13 00:53:27.086582 systemd-journald[199]: Runtime Journal (/run/log/journal/197168969f0f41e9afafb1f40fabc113) is 6.0M, max 48.5M, 42.5M free.
Sep 13 00:53:27.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.077916 systemd-modules-load[200]: Inserted module 'overlay'
Sep 13 00:53:27.119581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:53:27.119636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:53:27.119682 kernel: Bridge firewalling registered
Sep 13 00:53:27.119710 systemd[1]: Started systemd-journald.service.
Sep 13 00:53:27.119744 kernel: audit: type=1130 audit(1757724807.118:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.099481 systemd-resolved[201]: Positive Trust Anchors:
Sep 13 00:53:27.131590 kernel: audit: type=1130 audit(1757724807.118:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.131614 kernel: audit: type=1130 audit(1757724807.127:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.099492 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:53:27.099525 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:53:27.102355 systemd-resolved[201]: Defaulting to hostname 'linux'.
Sep 13 00:53:27.145564 kernel: audit: type=1130 audit(1757724807.140:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.118334 systemd-modules-load[200]: Inserted module 'br_netfilter'
Sep 13 00:53:27.118655 systemd[1]: Started systemd-resolved.service.
Sep 13 00:53:27.119109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:53:27.127851 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:53:27.134040 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:53:27.141486 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:53:27.153697 kernel: SCSI subsystem initialized
Sep 13 00:53:27.161641 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:53:27.171284 kernel: audit: type=1130 audit(1757724807.162:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.171303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:53:27.171313 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:53:27.171324 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 00:53:27.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:27.163792 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:53:27.173439 systemd-modules-load[200]: Inserted module 'dm_multipath'
Sep 13 00:53:27.174246 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:53:27.179083 kernel: audit: type=1130 audit(1757724807.175:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 00:53:27.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.179167 dracut-cmdline[217]: dracut-dracut-053 Sep 13 00:53:27.179167 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:27.176290 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:27.193709 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:27.197993 kernel: audit: type=1130 audit(1757724807.193:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.240220 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:53:27.256227 kernel: iscsi: registered transport (tcp) Sep 13 00:53:27.278307 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:53:27.278336 kernel: QLogic iSCSI HBA Driver Sep 13 00:53:27.311010 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:53:27.315429 kernel: audit: type=1130 audit(1757724807.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:27.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.315456 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:53:27.362252 kernel: raid6: avx2x4 gen() 30068 MB/s Sep 13 00:53:27.379219 kernel: raid6: avx2x4 xor() 8479 MB/s Sep 13 00:53:27.396216 kernel: raid6: avx2x2 gen() 29553 MB/s Sep 13 00:53:27.413219 kernel: raid6: avx2x2 xor() 18908 MB/s Sep 13 00:53:27.430226 kernel: raid6: avx2x1 gen() 25936 MB/s Sep 13 00:53:27.447236 kernel: raid6: avx2x1 xor() 14955 MB/s Sep 13 00:53:27.464247 kernel: raid6: sse2x4 gen() 14690 MB/s Sep 13 00:53:27.481254 kernel: raid6: sse2x4 xor() 7680 MB/s Sep 13 00:53:27.498256 kernel: raid6: sse2x2 gen() 16031 MB/s Sep 13 00:53:27.515233 kernel: raid6: sse2x2 xor() 9635 MB/s Sep 13 00:53:27.532226 kernel: raid6: sse2x1 gen() 11780 MB/s Sep 13 00:53:27.549592 kernel: raid6: sse2x1 xor() 7247 MB/s Sep 13 00:53:27.549649 kernel: raid6: using algorithm avx2x4 gen() 30068 MB/s Sep 13 00:53:27.549659 kernel: raid6: .... xor() 8479 MB/s, rmw enabled Sep 13 00:53:27.550274 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:53:27.563227 kernel: xor: automatically using best checksumming function avx Sep 13 00:53:27.716257 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:53:27.724466 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:53:27.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.725000 audit: BPF prog-id=7 op=LOAD Sep 13 00:53:27.726000 audit: BPF prog-id=8 op=LOAD Sep 13 00:53:27.726667 systemd[1]: Starting systemd-udevd.service... Sep 13 00:53:27.739051 systemd-udevd[401]: Using default interface naming scheme 'v252'. 
Sep 13 00:53:27.743094 systemd[1]: Started systemd-udevd.service. Sep 13 00:53:27.743930 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:53:27.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.755813 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:53:27.778466 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:53:27.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.780062 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:27.819593 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:27.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:27.853311 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:53:27.861944 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:53:27.861966 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:53:27.861978 kernel: GPT:9289727 != 19775487 Sep 13 00:53:27.861990 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:53:27.862002 kernel: GPT:9289727 != 19775487 Sep 13 00:53:27.862014 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:53:27.862033 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:27.870238 kernel: libata version 3.00 loaded. Sep 13 00:53:27.883243 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:53:27.906107 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:53:27.906126 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 13 00:53:27.906136 kernel: AES CTR mode by8 optimization enabled Sep 13 00:53:27.906145 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:53:27.906260 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:53:27.906347 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (448) Sep 13 00:53:27.906357 kernel: scsi host0: ahci Sep 13 00:53:27.906460 kernel: scsi host1: ahci Sep 13 00:53:27.906566 kernel: scsi host2: ahci Sep 13 00:53:27.906663 kernel: scsi host3: ahci Sep 13 00:53:27.906759 kernel: scsi host4: ahci Sep 13 00:53:27.906853 kernel: scsi host5: ahci Sep 13 00:53:27.906943 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:53:27.906953 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:53:27.906961 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:53:27.906970 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:53:27.906979 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:53:27.906988 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:53:27.896433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:53:27.939563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:53:27.945626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:53:27.946657 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:53:27.951726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:27.956463 systemd[1]: Starting disk-uuid.service... Sep 13 00:53:27.964461 disk-uuid[523]: Primary Header is updated. Sep 13 00:53:27.964461 disk-uuid[523]: Secondary Entries is updated. Sep 13 00:53:27.964461 disk-uuid[523]: Secondary Header is updated. 
Sep 13 00:53:27.968012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:27.971214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:27.974218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:28.219575 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:28.219654 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:28.219665 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:28.221228 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:28.222225 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:53:28.223221 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:28.223241 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:53:28.223992 kernel: ata3.00: applying bridge limits Sep 13 00:53:28.225282 kernel: ata3.00: configured for UDMA/100 Sep 13 00:53:28.226229 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:53:28.258247 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:53:28.274900 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:53:28.274913 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:53:28.973984 disk-uuid[524]: The operation has completed successfully. Sep 13 00:53:28.975257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:53:28.996640 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:53:28.996730 systemd[1]: Finished disk-uuid.service. Sep 13 00:53:28.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:28.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.002243 systemd[1]: Starting verity-setup.service... 
Sep 13 00:53:29.015249 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:53:29.035090 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:53:29.036499 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:53:29.038632 systemd[1]: Finished verity-setup.service. Sep 13 00:53:29.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.106216 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:53:29.106229 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:53:29.107783 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:53:29.109842 systemd[1]: Starting ignition-setup.service... Sep 13 00:53:29.111833 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:53:29.118362 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:29.118385 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:29.118396 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:29.127558 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:53:29.136160 systemd[1]: Finished ignition-setup.service. Sep 13 00:53:29.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.138376 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:53:29.180902 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:53:29.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:29.183034 systemd[1]: Starting systemd-networkd.service... Sep 13 00:53:29.182000 audit: BPF prog-id=9 op=LOAD Sep 13 00:53:29.203758 systemd-networkd[716]: lo: Link UP Sep 13 00:53:29.204031 systemd-networkd[716]: lo: Gained carrier Sep 13 00:53:29.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.206803 ignition[641]: Ignition 2.14.0 Sep 13 00:53:29.204531 systemd-networkd[716]: Enumeration completed Sep 13 00:53:29.206811 ignition[641]: Stage: fetch-offline Sep 13 00:53:29.204602 systemd[1]: Started systemd-networkd.service. Sep 13 00:53:29.206860 ignition[641]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:29.204833 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:29.206870 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:29.206039 systemd[1]: Reached target network.target. Sep 13 00:53:29.206960 ignition[641]: parsed url from cmdline: "" Sep 13 00:53:29.206060 systemd-networkd[716]: eth0: Link UP Sep 13 00:53:29.206963 ignition[641]: no config URL provided Sep 13 00:53:29.206063 systemd-networkd[716]: eth0: Gained carrier Sep 13 00:53:29.206967 ignition[641]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:53:29.208327 systemd[1]: Starting iscsiuio.service... 
Sep 13 00:53:29.206974 ignition[641]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:53:29.206990 ignition[641]: op(1): [started] loading QEMU firmware config module Sep 13 00:53:29.207000 ignition[641]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:53:29.214243 ignition[641]: op(1): [finished] loading QEMU firmware config module Sep 13 00:53:29.261638 ignition[641]: parsing config with SHA512: ad4245e34a9336552d3b498f0c16f9d96752aa4c605ceea4be16df9ccba17d4da9f277c1bda3ba7e775686ec2a708e6116d1565d9ba51580f2f27836c2b45ce5 Sep 13 00:53:29.310085 unknown[641]: fetched base config from "system" Sep 13 00:53:29.311031 unknown[641]: fetched user config from "qemu" Sep 13 00:53:29.312419 ignition[641]: fetch-offline: fetch-offline passed Sep 13 00:53:29.313478 ignition[641]: Ignition finished successfully Sep 13 00:53:29.315436 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:53:29.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.317325 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:53:29.318300 systemd[1]: Starting ignition-kargs.service... Sep 13 00:53:29.319846 systemd[1]: Started iscsiuio.service. Sep 13 00:53:29.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.322180 systemd[1]: Starting iscsid.service... Sep 13 00:53:29.325309 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:53:29.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:29.330745 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:29.330745 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:53:29.330745 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:53:29.330745 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:53:29.330745 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:29.330745 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:53:29.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.328644 systemd[1]: Started iscsid.service. Sep 13 00:53:29.345306 ignition[722]: Ignition 2.14.0 Sep 13 00:53:29.330062 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:53:29.345312 ignition[722]: Stage: kargs Sep 13 00:53:29.340417 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:53:29.345426 ignition[722]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:29.341462 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:53:29.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:29.345435 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:29.343452 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:29.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.346766 ignition[722]: kargs: kargs passed Sep 13 00:53:29.344308 systemd[1]: Reached target remote-fs.target. Sep 13 00:53:29.346804 ignition[722]: Ignition finished successfully Sep 13 00:53:29.345683 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:53:29.351743 systemd[1]: Finished ignition-kargs.service. Sep 13 00:53:29.353706 systemd[1]: Starting ignition-disks.service... Sep 13 00:53:29.355473 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:53:29.365387 ignition[743]: Ignition 2.14.0 Sep 13 00:53:29.365398 ignition[743]: Stage: disks Sep 13 00:53:29.365508 ignition[743]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:29.365518 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:29.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.367171 systemd[1]: Finished ignition-disks.service. Sep 13 00:53:29.366535 ignition[743]: disks: disks passed Sep 13 00:53:29.368478 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:53:29.366572 ignition[743]: Ignition finished successfully Sep 13 00:53:29.370279 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:53:29.371083 systemd[1]: Reached target local-fs.target. Sep 13 00:53:29.372599 systemd[1]: Reached target sysinit.target. Sep 13 00:53:29.372651 systemd[1]: Reached target basic.target. Sep 13 00:53:29.373730 systemd[1]: Starting systemd-fsck-root.service... 
Sep 13 00:53:29.386356 systemd-fsck[752]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:53:29.392411 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:53:29.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.395643 systemd[1]: Mounting sysroot.mount... Sep 13 00:53:29.402217 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:53:29.402821 systemd[1]: Mounted sysroot.mount. Sep 13 00:53:29.404499 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:53:29.407513 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:53:29.409556 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:53:29.409611 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:53:29.413113 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:53:29.416147 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:53:29.418800 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:53:29.423228 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:53:29.428140 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:53:29.431310 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:53:29.435073 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:53:29.464395 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:53:29.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:29.465267 systemd[1]: Starting ignition-mount.service... Sep 13 00:53:29.467183 systemd[1]: Starting sysroot-boot.service... Sep 13 00:53:29.472473 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:53:29.575066 systemd[1]: Finished sysroot-boot.service. Sep 13 00:53:29.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:29.581577 ignition[804]: INFO : Ignition 2.14.0 Sep 13 00:53:29.581577 ignition[804]: INFO : Stage: mount Sep 13 00:53:29.583362 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:29.583362 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:29.583362 ignition[804]: INFO : mount: mount passed Sep 13 00:53:29.583362 ignition[804]: INFO : Ignition finished successfully Sep 13 00:53:29.588621 systemd[1]: Finished ignition-mount.service. Sep 13 00:53:29.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:30.046102 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:53:30.054227 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Sep 13 00:53:30.054251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:30.055648 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:53:30.055660 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:53:30.059538 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:53:30.061109 systemd[1]: Starting ignition-files.service... 
Sep 13 00:53:30.077035 ignition[833]: INFO : Ignition 2.14.0 Sep 13 00:53:30.077035 ignition[833]: INFO : Stage: files Sep 13 00:53:30.078640 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:30.078640 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:30.081514 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:53:30.083309 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:53:30.083309 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:53:30.087305 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:53:30.088721 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:53:30.090481 unknown[833]: wrote ssh authorized keys file for user: core Sep 13 00:53:30.091487 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:53:30.093077 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:53:30.094869 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:53:30.146264 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:53:30.450867 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:53:30.453026 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:53:30.453026 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:53:30.687809 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:53:30.756363 systemd-networkd[716]: eth0: Gained IPv6LL Sep 13 00:53:30.809253 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:53:30.811132 ignition[833]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:30.811132 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:53:31.050015 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:53:31.496868 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:53:31.496868 ignition[833]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:53:31.500759 ignition[833]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:53:31.535011 ignition[833]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:53:31.537987 ignition[833]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:53:31.537987 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:53:31.537987 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:53:31.537987 ignition[833]: INFO : files: files passed Sep 13 00:53:31.537987 ignition[833]: INFO : Ignition finished successfully Sep 13 00:53:31.561432 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 13 00:53:31.561457 kernel: audit: type=1130 audit(1757724811.537:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.561469 kernel: audit: type=1130 audit(1757724811.549:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:31.561479 kernel: audit: type=1131 audit(1757724811.549:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.561488 kernel: audit: type=1130 audit(1757724811.556:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.536631 systemd[1]: Finished ignition-files.service. Sep 13 00:53:31.538883 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:53:31.544239 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:53:31.566283 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:53:31.544873 systemd[1]: Starting ignition-quench.service... 
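The Ignition `op(N)` entries above (ops 4 through 13) always come in `[started]`/`[finished]` pairs, with hexadecimal op identifiers (`op(a)`, `op(13)`). A minimal sketch that checks this invariant over raw journal lines; `unfinished_ops` and the sample lines are illustrative helpers, not part of Ignition itself:

```python
import re

# Match Ignition op markers such as:
#   ignition[833]: INFO : files: ... op(5): [started] writing file "..."
# Op identifiers are hexadecimal, so [0-9a-f]+ rather than \d+.
OP_RE = re.compile(r'op\(([0-9a-f]+)\): \[(started|finished)\]')

def unfinished_ops(lines):
    """Return the set of op ids that reported [started] but never [finished]."""
    started, finished = set(), set()
    for line in lines:
        m = OP_RE.search(line)
        if m:
            (started if m.group(2) == "started" else finished).add(m.group(1))
    return started - finished

sample = [
    'ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"',
    'ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"',
    'ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"',
]
print(unfinished_ops(sample))  # {'6'}
```

In the excerpt above every op that starts also finishes, so the same check run over those lines returns an empty set.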
Sep 13 00:53:31.568732 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:53:31.547423 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:53:31.547520 systemd[1]: Finished ignition-quench.service. Sep 13 00:53:31.550400 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:53:31.556854 systemd[1]: Reached target ignition-complete.target. Sep 13 00:53:31.562078 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:53:31.582399 kernel: audit: type=1130 audit(1757724811.575:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.582419 kernel: audit: type=1131 audit(1757724811.575:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.573918 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:53:31.573996 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:53:31.575421 systemd[1]: Reached target initrd-fs.target. Sep 13 00:53:31.582400 systemd[1]: Reached target initrd.target. Sep 13 00:53:31.583250 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:53:31.583904 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:53:31.594563 systemd[1]: Finished dracut-pre-pivot.service. 
Sep 13 00:53:31.599705 kernel: audit: type=1130 audit(1757724811.595:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.596029 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:53:31.604683 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:53:31.605599 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:53:31.607234 systemd[1]: Stopped target timers.target. Sep 13 00:53:31.608805 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:53:31.614890 kernel: audit: type=1131 audit(1757724811.610:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.608897 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:53:31.610486 systemd[1]: Stopped target initrd.target. Sep 13 00:53:31.614955 systemd[1]: Stopped target basic.target. Sep 13 00:53:31.616513 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:53:31.618091 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:53:31.619687 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:53:31.621404 systemd[1]: Stopped target remote-fs.target. Sep 13 00:53:31.623008 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:53:31.624726 systemd[1]: Stopped target sysinit.target. Sep 13 00:53:31.626265 systemd[1]: Stopped target local-fs.target. 
Sep 13 00:53:31.627808 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:53:31.629360 systemd[1]: Stopped target swap.target. Sep 13 00:53:31.636697 kernel: audit: type=1131 audit(1757724811.632:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.630792 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:53:31.630883 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:53:31.642976 kernel: audit: type=1131 audit(1757724811.638:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.632478 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:53:31.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.636745 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:53:31.636842 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:53:31.638600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:53:31.638687 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:53:31.643084 systemd[1]: Stopped target paths.target. Sep 13 00:53:31.644574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 13 00:53:31.648265 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:53:31.649255 systemd[1]: Stopped target slices.target. Sep 13 00:53:31.650684 systemd[1]: Stopped target sockets.target. Sep 13 00:53:31.652513 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:53:31.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.652578 systemd[1]: Closed iscsid.socket. Sep 13 00:53:31.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.654053 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:53:31.654117 systemd[1]: Closed iscsiuio.socket. Sep 13 00:53:31.655469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:53:31.655561 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:53:31.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.657161 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:53:31.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:31.669011 ignition[873]: INFO : Ignition 2.14.0 Sep 13 00:53:31.669011 ignition[873]: INFO : Stage: umount Sep 13 00:53:31.669011 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:53:31.669011 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:53:31.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.657262 systemd[1]: Stopped ignition-files.service. Sep 13 00:53:31.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.680282 ignition[873]: INFO : umount: umount passed Sep 13 00:53:31.680282 ignition[873]: INFO : Ignition finished successfully Sep 13 00:53:31.659564 systemd[1]: Stopping ignition-mount.service... Sep 13 00:53:31.662064 systemd[1]: Stopping sysroot-boot.service... 
Sep 13 00:53:31.662895 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:53:31.663144 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:53:31.665425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:53:31.665544 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:53:31.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.670426 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:53:31.670506 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:53:31.672274 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:53:31.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.672342 systemd[1]: Stopped ignition-mount.service. Sep 13 00:53:31.674875 systemd[1]: Stopped target network.target. Sep 13 00:53:31.676457 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:53:31.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.676495 systemd[1]: Stopped ignition-disks.service. 
Sep 13 00:53:31.677392 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:53:31.677426 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:53:31.705000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:53:31.678724 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:53:31.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.678759 systemd[1]: Stopped ignition-setup.service. Sep 13 00:53:31.680304 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:53:31.682112 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:53:31.684655 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:53:31.686251 systemd-networkd[716]: eth0: DHCPv6 lease lost Sep 13 00:53:31.741000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:53:31.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.687909 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:53:31.687992 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:53:31.691021 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:53:31.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:53:31.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.691050 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:53:31.693335 systemd[1]: Stopping network-cleanup.service... Sep 13 00:53:31.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.694172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:53:31.694227 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:53:31.694326 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:53:31.694357 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:53:31.695910 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:53:31.695942 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:53:31.697816 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:53:31.699509 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 13 00:53:31.699860 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:53:31.699945 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:53:31.705944 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:53:31.706015 systemd[1]: Stopped network-cleanup.service. Sep 13 00:53:31.741773 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:53:31.741884 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:53:31.743434 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:53:31.743468 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:53:31.745420 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:53:31.745459 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:53:31.747024 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:53:31.747065 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:53:31.747174 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:53:31.747221 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:53:31.747353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:53:31.747391 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:53:31.748247 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:53:31.748614 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:53:31.748660 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:53:31.751398 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:53:31.751431 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:53:31.753408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:53:31.753444 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:53:31.755333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Sep 13 00:53:31.755726 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:53:31.755795 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:53:31.926605 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:53:31.926720 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:53:31.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.928642 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:53:31.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:31.930107 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:53:31.930148 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:53:31.931687 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:53:31.948999 systemd[1]: Switching root. Sep 13 00:53:31.968128 iscsid[724]: iscsid shutting down. Sep 13 00:53:31.968895 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Sep 13 00:53:31.968934 systemd-journald[199]: Journal stopped Sep 13 00:53:34.736491 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:53:34.736538 kernel: SELinux: Class anon_inode not defined in policy. 
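Interleaved with the journal output are kernel audit records (`SERVICE_START`/`SERVICE_STOP`) that mirror each unit transition. A small sketch for pulling the event type and `unit=` field out of such records; the regex and the `service_events` helper are assumptions for illustration, not an audit-subsystem API:

```python
import re

# Match kernel audit service records such as:
#   audit[1]: SERVICE_START pid=1 uid=0 ... msg='unit=ignition-files comm="systemd" ...'
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([A-Za-z0-9@._\\-]+)")

def service_events(lines):
    """Return (event, unit) tuples in log order."""
    return [(m.group(1), m.group(2))
            for line in lines
            for m in [AUDIT_RE.search(line)] if m]

sample = [
    "audit[1]: SERVICE_START pid=1 uid=0 msg='unit=ignition-quench comm=\"systemd\"'",
    "audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=ignition-quench comm=\"systemd\"'",
]
print(service_events(sample))
```

Pairing the events per unit is then a matter of grouping on the second tuple element; in the log above, transient units like ignition-quench start and stop within the same second.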
Sep 13 00:53:34.736551 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:53:34.736563 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:53:34.736573 kernel: SELinux: policy capability open_perms=1 Sep 13 00:53:34.736584 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:53:34.736595 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:53:34.736606 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:53:34.736618 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:53:34.736631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:53:34.736640 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:53:34.736651 systemd[1]: Successfully loaded SELinux policy in 38.831ms. Sep 13 00:53:34.736664 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.300ms. Sep 13 00:53:34.736675 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:53:34.736686 systemd[1]: Detected virtualization kvm. Sep 13 00:53:34.736696 systemd[1]: Detected architecture x86-64. Sep 13 00:53:34.736706 systemd[1]: Detected first boot. Sep 13 00:53:34.736718 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:53:34.736730 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:53:34.736740 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:53:34.736751 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
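The systemd startup banner above encodes compile-time features as `+NAME` (enabled) and `-NAME` (disabled). A quick sketch splitting that list into two sets; `split_features` is a hypothetical helper, and the shortened banner string is excerpted from the full one in the log:

```python
import re

# systemd's banner lists build features as +NAME / -NAME, e.g.
#   systemd 252 running in system mode (+PAM +AUDIT -APPARMOR ... -TPM2 ...)
# Lowercase tokens like default-hierarchy=unified are deliberately excluded.
def split_features(banner):
    enabled = set(re.findall(r"\+([A-Z0-9_]+)", banner))
    disabled = set(re.findall(r"-([A-Z0-9_]+)", banner))
    return enabled, disabled

banner = "(+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -TPM2)"
on, off = split_features(banner)
print(sorted(off))  # ['APPARMOR', 'TPM2']
```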
Sep 13 00:53:34.736763 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:34.736775 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:53:34.736788 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:53:34.736798 systemd[1]: Stopped iscsiuio.service. Sep 13 00:53:34.736808 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:53:34.736819 systemd[1]: Stopped iscsid.service. Sep 13 00:53:34.736829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:53:34.736840 systemd[1]: Stopped initrd-switch-root.service. Sep 13 00:53:34.736851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:53:34.736863 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:53:34.736873 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:53:34.736883 systemd[1]: Created slice system-getty.slice. Sep 13 00:53:34.736896 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:53:34.736907 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:53:34.736917 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:53:34.736928 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:53:34.736938 systemd[1]: Created slice user.slice. Sep 13 00:53:34.736949 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:53:34.736961 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:53:34.736972 systemd[1]: Set up automount boot.automount. Sep 13 00:53:34.736982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:53:34.736992 systemd[1]: Stopped target initrd-switch-root.target. 
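The two warnings above flag deprecated resource directives in `locksmithd.service`. A hypothetical drop-in showing the replacements systemd suggests; the actual contents of that unit are not in the log, so the values here are placeholders:

```ini
# Hypothetical drop-in: /etc/systemd/system/locksmithd.service.d/10-resources.conf
# Replaces the deprecated directives flagged in the log above.
[Service]
# Was: CPUShares= (deprecated); CPUWeight= ranges 1-10000, default 100.
CPUWeight=100
# Was: MemoryLimit= (deprecated); MemoryMax= is the cgroup v2 hard limit.
MemoryMax=512M
```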
Sep 13 00:53:34.737003 systemd[1]: Stopped target initrd-fs.target. Sep 13 00:53:34.737014 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 00:53:34.737024 systemd[1]: Reached target integritysetup.target. Sep 13 00:53:34.737034 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:34.737046 systemd[1]: Reached target remote-fs.target. Sep 13 00:53:34.737057 systemd[1]: Reached target slices.target. Sep 13 00:53:34.737068 systemd[1]: Reached target swap.target. Sep 13 00:53:34.737078 systemd[1]: Reached target torcx.target. Sep 13 00:53:34.737088 systemd[1]: Reached target veritysetup.target. Sep 13 00:53:34.737099 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:53:34.737109 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:53:34.737120 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:53:34.737130 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:53:34.737140 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:53:34.737152 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:53:34.737162 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:53:34.737173 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:53:34.737183 systemd[1]: Mounting media.mount... Sep 13 00:53:34.737194 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:34.737216 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:53:34.737227 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:53:34.737237 systemd[1]: Mounting tmp.mount... Sep 13 00:53:34.737248 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:53:34.737261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:34.737271 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:53:34.737282 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:53:34.737292 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 13 00:53:34.737304 systemd[1]: Starting modprobe@drm.service... Sep 13 00:53:34.737320 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:34.737331 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:53:34.737342 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:34.737353 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:53:34.737365 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:53:34.737376 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:53:34.737386 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:53:34.737397 kernel: fuse: init (API version 7.34) Sep 13 00:53:34.737407 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:53:34.737418 systemd[1]: Stopped systemd-journald.service. Sep 13 00:53:34.737428 kernel: loop: module loaded Sep 13 00:53:34.737438 systemd[1]: Starting systemd-journald.service... Sep 13 00:53:34.737449 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:53:34.737461 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:53:34.737472 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:53:34.737482 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:34.737493 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:53:34.737503 systemd[1]: Stopped verity-setup.service. Sep 13 00:53:34.737514 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:34.737527 systemd-journald[987]: Journal started Sep 13 00:53:34.737567 systemd-journald[987]: Runtime Journal (/run/log/journal/197168969f0f41e9afafb1f40fabc113) is 6.0M, max 48.5M, 42.5M free. 
Sep 13 00:53:32.029000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:53:32.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:53:32.466000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:53:32.466000 audit: BPF prog-id=10 op=LOAD Sep 13 00:53:32.466000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:53:32.466000 audit: BPF prog-id=11 op=LOAD Sep 13 00:53:32.466000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:53:32.500000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:53:32.500000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b4 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:32.500000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:53:32.501000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:53:32.501000 audit[906]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5999 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:32.501000 audit: CWD cwd="/" Sep 13 00:53:32.501000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.501000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:32.501000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:53:34.592000 audit: BPF prog-id=12 op=LOAD Sep 13 00:53:34.592000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:53:34.592000 audit: BPF prog-id=13 op=LOAD Sep 13 00:53:34.592000 audit: BPF prog-id=14 op=LOAD Sep 13 00:53:34.592000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:53:34.592000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:53:34.593000 audit: BPF prog-id=15 op=LOAD Sep 13 00:53:34.593000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:53:34.593000 audit: BPF prog-id=16 op=LOAD Sep 13 00:53:34.594000 audit: BPF prog-id=17 op=LOAD Sep 13 00:53:34.594000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:53:34.594000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:53:34.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:34.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.603000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:53:34.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:34.717000 audit: BPF prog-id=18 op=LOAD Sep 13 00:53:34.717000 audit: BPF prog-id=19 op=LOAD Sep 13 00:53:34.717000 audit: BPF prog-id=20 op=LOAD Sep 13 00:53:34.717000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:53:34.717000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:53:34.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.734000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:53:34.734000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc5f1bdfa0 a2=4000 a3=7ffc5f1be03c items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:34.734000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:53:34.590924 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:53:32.498274 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:34.590936 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:53:32.498515 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:53:34.594796 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 13 00:53:32.498538 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:53:32.498573 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:53:34.739216 systemd[1]: Started systemd-journald.service. Sep 13 00:53:32.498586 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:53:34.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:32.498621 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:53:32.498636 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:53:32.498881 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:53:34.739518 systemd[1]: Mounted dev-hugepages.mount. 
Sep 13 00:53:32.498928 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:53:32.498945 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:53:32.499564 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:53:32.499603 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:53:32.499625 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:53:32.499642 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:53:32.499661 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:53:32.499677 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:53:34.320070 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" 
level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:53:34.320357 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:53:34.320456 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:53:34.320606 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:53:34.320652 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:53:34.320706 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2025-09-13T00:53:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:53:34.740560 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:53:34.741342 systemd[1]: Mounted media.mount. Sep 13 00:53:34.742056 systemd[1]: Mounted sys-kernel-debug.mount. 
Sep 13 00:53:34.742880 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:53:34.743731 systemd[1]: Mounted tmp.mount. Sep 13 00:53:34.744603 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:53:34.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.745645 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:53:34.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.746648 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:53:34.746792 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:53:34.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.747804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:34.748005 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:34.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:53:34.748996 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:53:34.749149 systemd[1]: Finished modprobe@drm.service. Sep 13 00:53:34.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.750107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:34.750449 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:34.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.751480 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:53:34.751633 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:53:34.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.752600 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 00:53:34.752781 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:34.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.753790 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:34.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.754889 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:53:34.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.755998 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:53:34.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.757249 systemd[1]: Reached target network-pre.target. Sep 13 00:53:34.759100 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:53:34.760941 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:53:34.761687 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:53:34.762897 systemd[1]: Starting systemd-hwdb-update.service... 
Sep 13 00:53:34.764660 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:53:34.765735 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:34.769668 systemd-journald[987]: Time spent on flushing to /var/log/journal/197168969f0f41e9afafb1f40fabc113 is 13.371ms for 1100 entries. Sep 13 00:53:34.769668 systemd-journald[987]: System Journal (/var/log/journal/197168969f0f41e9afafb1f40fabc113) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:53:34.863959 systemd-journald[987]: Received client request to flush runtime journal. Sep 13 00:53:34.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:34.766610 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:53:34.767451 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:34.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:53:34.768500 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:34.770370 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:53:34.866558 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:53:34.775051 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:53:34.782976 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:53:34.809630 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:34.811414 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:53:34.826513 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:53:34.827662 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:53:34.847749 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:34.850452 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:53:34.852806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:53:34.865014 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:53:34.872800 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:53:34.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:35.774956 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:53:35.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:35.776000 audit: BPF prog-id=21 op=LOAD Sep 13 00:53:35.776000 audit: BPF prog-id=22 op=LOAD Sep 13 00:53:35.776000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:53:35.776000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:53:35.778171 systemd[1]: Starting systemd-udevd.service... 
Sep 13 00:53:35.795557 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Sep 13 00:53:35.809663 systemd[1]: Started systemd-udevd.service. Sep 13 00:53:35.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:35.811000 audit: BPF prog-id=23 op=LOAD Sep 13 00:53:35.812675 systemd[1]: Starting systemd-networkd.service... Sep 13 00:53:35.819000 audit: BPF prog-id=24 op=LOAD Sep 13 00:53:35.819000 audit: BPF prog-id=25 op=LOAD Sep 13 00:53:35.819000 audit: BPF prog-id=26 op=LOAD Sep 13 00:53:35.820771 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:53:35.833190 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:53:35.855570 systemd[1]: Started systemd-userdbd.service. Sep 13 00:53:35.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:35.871275 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:35.893270 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:53:35.905234 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:53:35.916791 systemd-networkd[1022]: lo: Link UP Sep 13 00:53:35.917303 systemd-networkd[1022]: lo: Gained carrier Sep 13 00:53:35.917943 systemd-networkd[1022]: Enumeration completed Sep 13 00:53:35.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:35.918112 systemd[1]: Started systemd-networkd.service. 
Sep 13 00:53:35.919463 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:35.920678 systemd-networkd[1022]: eth0: Link UP Sep 13 00:53:35.920759 systemd-networkd[1022]: eth0: Gained carrier Sep 13 00:53:35.924000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:53:35.924000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cbf6d37e70 a1=338ec a2=7f017ee7abc5 a3=5 items=110 ppid=1014 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:53:35.924000 audit: CWD cwd="/" Sep 13 00:53:35.924000 audit: PATH item=0 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=1 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=2 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=3 name=(null) inode=12924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=4 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH 
item=5 name=(null) inode=12925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=6 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=7 name=(null) inode=12926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=8 name=(null) inode=12926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=9 name=(null) inode=12927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=10 name=(null) inode=12926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=11 name=(null) inode=12928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=12 name=(null) inode=12926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=13 name=(null) inode=12929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=14 name=(null) inode=12926 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=15 name=(null) inode=12930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=16 name=(null) inode=12926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=17 name=(null) inode=12931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=18 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=19 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=20 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=21 name=(null) inode=12933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=22 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=23 name=(null) inode=12934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=24 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=25 name=(null) inode=12935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=26 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=27 name=(null) inode=12936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=28 name=(null) inode=12932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=29 name=(null) inode=12937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=30 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=31 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=32 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=33 name=(null) inode=12939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=34 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=35 name=(null) inode=12940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=36 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=37 name=(null) inode=12941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=38 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=39 name=(null) inode=12942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=40 name=(null) inode=12938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=41 name=(null) inode=12943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=42 name=(null) inode=12923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=43 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=44 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=45 name=(null) inode=12945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=46 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=47 name=(null) inode=12946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=48 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=49 name=(null) inode=12947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=50 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:53:35.924000 audit: PATH item=51 name=(null) inode=12948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=52 name=(null) inode=12944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=53 name=(null) inode=12949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=54 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=55 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=56 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=57 name=(null) inode=12951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=58 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=59 name=(null) inode=12952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=60 
name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=61 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=62 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=63 name=(null) inode=12954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=64 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=65 name=(null) inode=12955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=66 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=67 name=(null) inode=12956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=68 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=69 name=(null) inode=12957 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=70 name=(null) inode=12953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=71 name=(null) inode=12958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=72 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=73 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=74 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=75 name=(null) inode=12960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=76 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=77 name=(null) inode=12961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=78 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=79 name=(null) inode=12962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=80 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=81 name=(null) inode=12963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=82 name=(null) inode=12959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=83 name=(null) inode=12964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=84 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=85 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=86 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=87 name=(null) inode=12966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=88 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=89 name=(null) inode=12967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=90 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=91 name=(null) inode=12968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=92 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=93 name=(null) inode=12969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=94 name=(null) inode=12965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=95 name=(null) inode=12970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=96 name=(null) inode=12950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=97 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=98 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=99 name=(null) inode=12972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=100 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=101 name=(null) inode=12973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=102 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=103 name=(null) inode=12974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=104 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=105 name=(null) inode=12975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:53:35.924000 audit: PATH item=106 name=(null) inode=12971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=107 name=(null) inode=12976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PATH item=109 name=(null) inode=12977 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:53:35.924000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:53:35.937814 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:53:35.939769 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:53:35.939924 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:53:35.939045 systemd-networkd[1022]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:53:35.958232 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:53:35.961223 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:53:36.010600 kernel: kvm: Nested Virtualization enabled Sep 13 00:53:36.010686 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:53:36.010720 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:53:36.012217 kernel: SVM: Virtual GIF supported Sep 13 00:53:36.027226 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:53:36.054662 systemd[1]: Finished systemd-udev-settle.service. 
Sep 13 00:53:36.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.056803 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:53:36.125063 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:53:36.154289 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:53:36.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.155410 systemd[1]: Reached target cryptsetup.target. Sep 13 00:53:36.157388 systemd[1]: Starting lvm2-activation.service... Sep 13 00:53:36.160981 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:53:36.187936 systemd[1]: Finished lvm2-activation.service. Sep 13 00:53:36.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.188939 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:53:36.189812 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:53:36.189836 systemd[1]: Reached target local-fs.target. Sep 13 00:53:36.190663 systemd[1]: Reached target machines.target. Sep 13 00:53:36.192807 systemd[1]: Starting ldconfig.service... Sep 13 00:53:36.193877 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:36.193913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:36.194864 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:53:36.196666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:53:36.198786 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:53:36.203338 systemd[1]: Starting systemd-sysext.service... Sep 13 00:53:36.203762 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Sep 13 00:53:36.205112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:53:36.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.207266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:53:36.214039 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:53:36.218912 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:53:36.219098 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:53:36.230251 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:53:36.314526 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Sep 13 00:53:36.314526 systemd-fsck[1060]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:53:36.316390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:53:36.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.319716 systemd[1]: Mounting boot.mount... 
Sep 13 00:53:36.332534 systemd[1]: Mounted boot.mount. Sep 13 00:53:36.549323 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:53:36.550037 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:53:36.551234 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:53:36.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.553306 kernel: kauditd_printk_skb: 232 callbacks suppressed Sep 13 00:53:36.553424 kernel: audit: type=1130 audit(1757724816.552:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.553341 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:53:36.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.563237 kernel: audit: type=1130 audit(1757724816.558:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.579238 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:53:36.584166 (sd-sysext)[1065]: Using extensions 'kubernetes'. Sep 13 00:53:36.584659 (sd-sysext)[1065]: Merged extensions into '/usr'. Sep 13 00:53:36.603861 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:36.606070 systemd[1]: Mounting usr-share-oem.mount... 
Sep 13 00:53:36.607442 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:53:36.609652 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:53:36.612504 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:53:36.614804 systemd[1]: Starting modprobe@loop.service... Sep 13 00:53:36.615889 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:53:36.616016 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:53:36.616120 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:53:36.618692 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:53:36.619829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:53:36.619981 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:53:36.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.621523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:53:36.621672 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:53:36.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.629673 kernel: audit: type=1130 audit(1757724816.620:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:36.629734 kernel: audit: type=1131 audit(1757724816.620:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.638951 kernel: audit: type=1130 audit(1757724816.630:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.639014 kernel: audit: type=1131 audit(1757724816.630:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.630929 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:53:36.631038 systemd[1]: Finished modprobe@loop.service. Sep 13 00:53:36.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.641340 systemd[1]: Finished systemd-sysext.service. Sep 13 00:53:36.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:36.644217 kernel: audit: type=1130 audit(1757724816.639:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.644280 kernel: audit: type=1131 audit(1757724816.639:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.649145 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:53:36.651144 systemd[1]: Starting ensure-sysext.service... Sep 13 00:53:36.652219 kernel: audit: type=1130 audit(1757724816.648:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.652841 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:53:36.652916 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:53:36.654194 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:53:36.660783 systemd[1]: Finished ldconfig.service. Sep 13 00:53:36.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.661818 systemd[1]: Reloading. 
Sep 13 00:53:36.665247 kernel: audit: type=1130 audit(1757724816.661:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.670342 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:53:36.672100 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:53:36.674103 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:53:36.727604 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-09-13T00:53:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:36.727992 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-09-13T00:53:36Z" level=info msg="torcx already run" Sep 13 00:53:36.795636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:53:36.795658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:53:36.815558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 00:53:36.878000 audit: BPF prog-id=27 op=LOAD Sep 13 00:53:36.878000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:53:36.878000 audit: BPF prog-id=28 op=LOAD Sep 13 00:53:36.878000 audit: BPF prog-id=29 op=LOAD Sep 13 00:53:36.878000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:53:36.879000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:53:36.879000 audit: BPF prog-id=30 op=LOAD Sep 13 00:53:36.879000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:53:36.879000 audit: BPF prog-id=31 op=LOAD Sep 13 00:53:36.880000 audit: BPF prog-id=32 op=LOAD Sep 13 00:53:36.880000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:53:36.880000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:53:36.880000 audit: BPF prog-id=33 op=LOAD Sep 13 00:53:36.880000 audit: BPF prog-id=34 op=LOAD Sep 13 00:53:36.880000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:53:36.880000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:53:36.881000 audit: BPF prog-id=35 op=LOAD Sep 13 00:53:36.881000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:53:36.884501 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:53:36.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:36.889690 systemd[1]: Starting audit-rules.service... Sep 13 00:53:36.891812 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:53:36.893872 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:53:36.895000 audit: BPF prog-id=36 op=LOAD Sep 13 00:53:36.896443 systemd[1]: Starting systemd-resolved.service... Sep 13 00:53:36.899000 audit: BPF prog-id=37 op=LOAD Sep 13 00:53:36.900577 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:53:36.902301 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:53:36.903809 systemd[1]: Finished clean-ca-certificates.service. 
Sep 13 00:53:36.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:36.906899 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:36.909027 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.910154 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:36.911739 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:36.913321 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:36.914027 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.914123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:36.914223 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:36.914909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:36.915009 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:36.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:36.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:53:36.917619 augenrules[1157]: No rules
Sep 13 00:53:36.917000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:53:36.917000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb9af70c0 a2=420 a3=0 items=0 ppid=1134 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:53:36.917000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:53:36.917316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:36.917441 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:36.918707 systemd[1]: Finished audit-rules.service.
Sep 13 00:53:36.919834 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:36.919948 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:36.921093 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:36.921187 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.924659 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:53:36.926913 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:53:36.928176 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.929508 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:36.931107 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:36.932810 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:36.933517 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.933611 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:36.934768 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:53:36.935514 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:36.936569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:36.936703 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:36.937837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:36.937950 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:36.939069 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:36.939165 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:36.940260 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:36.940358 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.942483 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.944011 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:53:36.945725 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:53:36.947461 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:53:36.949220 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:53:36.949966 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.950070 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:36.951300 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:53:36.952339 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:53:36.953311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:53:36.953423 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:53:36.954570 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:53:36.954681 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:53:36.955728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:53:36.955833 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:53:36.957752 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:53:36.957880 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:53:36.958999 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:53:36.959091 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:53:36.961527 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:53:37.041452 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:53:37.081584 systemd-resolved[1138]: Positive Trust Anchors:
Sep 13 00:53:37.081595 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:53:37.081621 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:53:37.082853 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:53:37.084220 systemd[1]: Reached target time-set.target.
Sep 13 00:53:37.084410 systemd-timesyncd[1144]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:53:37.084464 systemd-timesyncd[1144]: Initial clock synchronization to Sat 2025-09-13 00:53:37.415606 UTC.
Sep 13 00:53:37.089439 systemd-resolved[1138]: Defaulting to hostname 'linux'.
Sep 13 00:53:37.090854 systemd[1]: Started systemd-resolved.service.
Sep 13 00:53:37.091792 systemd[1]: Reached target network.target.
Sep 13 00:53:37.092613 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:53:37.093500 systemd[1]: Reached target sysinit.target.
Sep 13 00:53:37.094403 systemd[1]: Started motdgen.path.
Sep 13 00:53:37.095156 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:53:37.096446 systemd[1]: Started logrotate.timer.
Sep 13 00:53:37.097269 systemd[1]: Started mdadm.timer.
Sep 13 00:53:37.097974 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:53:37.098862 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:53:37.098895 systemd[1]: Reached target paths.target.
Sep 13 00:53:37.099676 systemd[1]: Reached target timers.target.
Sep 13 00:53:37.100464 systemd-networkd[1022]: eth0: Gained IPv6LL
Sep 13 00:53:37.101056 systemd[1]: Listening on dbus.socket.
Sep 13 00:53:37.102887 systemd[1]: Starting docker.socket...
Sep 13 00:53:37.105521 systemd[1]: Listening on sshd.socket.
Sep 13 00:53:37.106429 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:37.107073 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:53:37.108114 systemd[1]: Listening on docker.socket.
Sep 13 00:53:37.108921 systemd[1]: Reached target network-online.target.
Sep 13 00:53:37.109745 systemd[1]: Reached target sockets.target.
Sep 13 00:53:37.110507 systemd[1]: Reached target basic.target.
Sep 13 00:53:37.111279 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:53:37.111305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:53:37.112452 systemd[1]: Starting containerd.service...
Sep 13 00:53:37.114137 systemd[1]: Starting dbus.service...
Sep 13 00:53:37.115813 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:53:37.117609 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:53:37.118980 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:53:37.120540 systemd[1]: Starting kubelet.service...
Sep 13 00:53:37.120750 jq[1176]: false
Sep 13 00:53:37.123120 systemd[1]: Starting motdgen.service...
Sep 13 00:53:37.125380 systemd[1]: Starting prepare-helm.service...
Sep 13 00:53:37.127723 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:53:37.131279 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:53:37.135186 systemd[1]: Starting systemd-logind.service...
Sep 13 00:53:37.136308 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:53:37.136414 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:53:37.136880 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:53:37.137057 extend-filesystems[1177]: Found loop1
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found sr0
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda1
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda2
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda3
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found usr
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda4
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda6
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda7
Sep 13 00:53:37.138163 extend-filesystems[1177]: Found vda9
Sep 13 00:53:37.138163 extend-filesystems[1177]: Checking size of /dev/vda9
Sep 13 00:53:37.152905 extend-filesystems[1177]: Resized partition /dev/vda9
Sep 13 00:53:37.149060 dbus-daemon[1175]: [system] SELinux support is enabled
Sep 13 00:53:37.140940 systemd[1]: Starting update-engine.service...
Sep 13 00:53:37.154962 extend-filesystems[1201]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:53:37.145797 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:53:37.150257 systemd[1]: Started dbus.service.
Sep 13 00:53:37.156186 jq[1198]: true
Sep 13 00:53:37.159321 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:53:37.162122 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:53:37.162495 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:53:37.163929 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:53:37.164157 systemd[1]: Finished motdgen.service.
Sep 13 00:53:37.166552 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:53:37.166954 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:53:37.172649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:53:37.172696 systemd[1]: Reached target system-config.target.
Sep 13 00:53:37.174745 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:53:37.174769 systemd[1]: Reached target user-config.target.
Sep 13 00:53:37.175579 jq[1205]: true
Sep 13 00:53:37.192029 tar[1204]: linux-amd64/helm
Sep 13 00:53:37.214043 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:53:37.219399 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:37.219415 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:53:37.283472 extend-filesystems[1201]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:53:37.283472 extend-filesystems[1201]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:53:37.283472 extend-filesystems[1201]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:53:37.287592 extend-filesystems[1177]: Resized filesystem in /dev/vda9
Sep 13 00:53:37.288402 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:53:37.288419 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:53:37.289185 systemd-logind[1191]: New seat seat0.
Sep 13 00:53:37.290905 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:53:37.291079 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:53:37.293261 env[1206]: time="2025-09-13T00:53:37.292652731Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:53:37.294365 systemd[1]: Started systemd-logind.service.
Sep 13 00:53:37.317173 update_engine[1195]: I0913 00:53:37.315842 1195 main.cc:92] Flatcar Update Engine starting
Sep 13 00:53:37.318794 systemd[1]: Started update-engine.service.
Sep 13 00:53:37.319753 update_engine[1195]: I0913 00:53:37.318997 1195 update_check_scheduler.cc:74] Next update check in 4m36s
Sep 13 00:53:37.320785 env[1206]: time="2025-09-13T00:53:37.320735376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:53:37.321387 env[1206]: time="2025-09-13T00:53:37.321368783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.321653 systemd[1]: Started locksmithd.service.
Sep 13 00:53:37.322505 bash[1232]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:53:37.323081 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:53:37.325096 env[1206]: time="2025-09-13T00:53:37.325069402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:53:37.325178 env[1206]: time="2025-09-13T00:53:37.325159782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.325481 env[1206]: time="2025-09-13T00:53:37.325460946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:53:37.325570 env[1206]: time="2025-09-13T00:53:37.325550915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.325650 env[1206]: time="2025-09-13T00:53:37.325629803Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:53:37.325733 env[1206]: time="2025-09-13T00:53:37.325713560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.325882 env[1206]: time="2025-09-13T00:53:37.325862669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.326229 env[1206]: time="2025-09-13T00:53:37.326211233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:53:37.326426 env[1206]: time="2025-09-13T00:53:37.326405036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:53:37.326515 env[1206]: time="2025-09-13T00:53:37.326496969Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:53:37.326640 env[1206]: time="2025-09-13T00:53:37.326616112Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:53:37.326716 env[1206]: time="2025-09-13T00:53:37.326697415Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:53:37.336929 env[1206]: time="2025-09-13T00:53:37.336880899Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:53:37.337001 env[1206]: time="2025-09-13T00:53:37.336933527Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:53:37.337001 env[1206]: time="2025-09-13T00:53:37.336947133Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:53:37.337001 env[1206]: time="2025-09-13T00:53:37.336988571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337060 env[1206]: time="2025-09-13T00:53:37.337004140Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337060 env[1206]: time="2025-09-13T00:53:37.337017254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337060 env[1206]: time="2025-09-13T00:53:37.337032232Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337060 env[1206]: time="2025-09-13T00:53:37.337046860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337060 env[1206]: time="2025-09-13T00:53:37.337059053Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337165 env[1206]: time="2025-09-13T00:53:37.337073189Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337165 env[1206]: time="2025-09-13T00:53:37.337086644Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337165 env[1206]: time="2025-09-13T00:53:37.337101703Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:53:37.337257 env[1206]: time="2025-09-13T00:53:37.337226527Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:53:37.337344 env[1206]: time="2025-09-13T00:53:37.337318459Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:53:37.337583 env[1206]: time="2025-09-13T00:53:37.337557948Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:53:37.337629 env[1206]: time="2025-09-13T00:53:37.337587473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337629 env[1206]: time="2025-09-13T00:53:37.337600277Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:53:37.337668 env[1206]: time="2025-09-13T00:53:37.337646664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337668 env[1206]: time="2025-09-13T00:53:37.337660961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337708 env[1206]: time="2025-09-13T00:53:37.337671651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337708 env[1206]: time="2025-09-13T00:53:37.337682371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337708 env[1206]: time="2025-09-13T00:53:37.337693112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337779 env[1206]: time="2025-09-13T00:53:37.337704493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337779 env[1206]: time="2025-09-13T00:53:37.337728969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337779 env[1206]: time="2025-09-13T00:53:37.337739118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337779 env[1206]: time="2025-09-13T00:53:37.337754426Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:53:37.337903 env[1206]: time="2025-09-13T00:53:37.337879882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337903 env[1206]: time="2025-09-13T00:53:37.337900080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337965 env[1206]: time="2025-09-13T00:53:37.337919877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.337965 env[1206]: time="2025-09-13T00:53:37.337930867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:53:37.337965 env[1206]: time="2025-09-13T00:53:37.337944212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:53:37.337965 env[1206]: time="2025-09-13T00:53:37.337956345Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:53:37.338054 env[1206]: time="2025-09-13T00:53:37.337977304Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:53:37.338054 env[1206]: time="2025-09-13T00:53:37.338016598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:53:37.338279 env[1206]: time="2025-09-13T00:53:37.338210932Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.338286073Z" level=info msg="Connect containerd service"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.338323303Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.338910634Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.339108866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.339138962Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.339180240Z" level=info msg="containerd successfully booted in 0.052227s"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.367470093Z" level=info msg="Start subscribing containerd event"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.368171158Z" level=info msg="Start recovering state"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.369360448Z" level=info msg="Start event monitor"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.369553740Z" level=info msg="Start snapshots syncer"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.369598454Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:53:37.387749 env[1206]: time="2025-09-13T00:53:37.369614855Z" level=info msg="Start streaming server"
Sep 13 00:53:37.339282 systemd[1]: Started containerd.service.
Sep 13 00:53:37.459378 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:53:37.820759 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:53:37.843851 systemd[1]: Finished sshd-keygen.service.
Sep 13 00:53:37.846431 systemd[1]: Starting issuegen.service...
Sep 13 00:53:37.854812 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:53:37.854945 systemd[1]: Finished issuegen.service.
Sep 13 00:53:37.857072 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 00:53:37.899626 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 00:53:37.902295 systemd[1]: Started getty@tty1.service.
Sep 13 00:53:37.904575 systemd[1]: Started serial-getty@ttyS0.service.
Sep 13 00:53:37.905775 systemd[1]: Reached target getty.target.
Sep 13 00:53:38.017383 tar[1204]: linux-amd64/LICENSE
Sep 13 00:53:38.017533 tar[1204]: linux-amd64/README.md
Sep 13 00:53:38.023911 systemd[1]: Finished prepare-helm.service.
Sep 13 00:53:39.003490 systemd[1]: Started kubelet.service.
Sep 13 00:53:39.004854 systemd[1]: Reached target multi-user.target.
Sep 13 00:53:39.007313 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 00:53:39.019228 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 00:53:39.019477 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 00:53:39.020739 systemd[1]: Startup finished in 914ms (kernel) + 5.059s (initrd) + 7.032s (userspace) = 13.006s.
Sep 13 00:53:39.752045 kubelet[1259]: E0913 00:53:39.751946 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:53:39.753772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:53:39.753898 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:53:39.754132 systemd[1]: kubelet.service: Consumed 2.259s CPU time.
Sep 13 00:53:40.423791 systemd[1]: Created slice system-sshd.slice.
Sep 13 00:53:40.425152 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:46330.service.
Sep 13 00:53:40.470603 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 46330 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:53:40.472345 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:40.482867 systemd-logind[1191]: New session 1 of user core.
Sep 13 00:53:40.484012 systemd[1]: Created slice user-500.slice.
Sep 13 00:53:40.485423 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 00:53:40.494421 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 00:53:40.496191 systemd[1]: Starting user@500.service...
Sep 13 00:53:40.499176 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:40.572355 systemd[1271]: Queued start job for default target default.target.
Sep 13 00:53:40.572883 systemd[1271]: Reached target paths.target.
Sep 13 00:53:40.572903 systemd[1271]: Reached target sockets.target.
Sep 13 00:53:40.572915 systemd[1271]: Reached target timers.target.
Sep 13 00:53:40.572926 systemd[1271]: Reached target basic.target.
Sep 13 00:53:40.572963 systemd[1271]: Reached target default.target.
Sep 13 00:53:40.572987 systemd[1271]: Startup finished in 67ms.
Sep 13 00:53:40.573053 systemd[1]: Started user@500.service.
Sep 13 00:53:40.574038 systemd[1]: Started session-1.scope.
Sep 13 00:53:40.626742 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:46334.service.
Sep 13 00:53:40.671315 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 46334 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:53:40.672958 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:40.676797 systemd-logind[1191]: New session 2 of user core.
Sep 13 00:53:40.677825 systemd[1]: Started session-2.scope.
Sep 13 00:53:40.732792 sshd[1280]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:40.735441 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:46334.service: Deactivated successfully.
Sep 13 00:53:40.735991 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:53:40.736473 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:53:40.737538 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:46336.service.
Sep 13 00:53:40.738358 systemd-logind[1191]: Removed session 2.
Sep 13 00:53:40.776870 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:53:40.778168 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:40.781621 systemd-logind[1191]: New session 3 of user core.
Sep 13 00:53:40.782550 systemd[1]: Started session-3.scope.
Sep 13 00:53:40.834590 sshd[1286]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:40.837847 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:46336.service: Deactivated successfully.
Sep 13 00:53:40.838473 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:53:40.838999 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:53:40.840300 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:46340.service.
Sep 13 00:53:40.841159 systemd-logind[1191]: Removed session 3.
Sep 13 00:53:40.878583 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 46340 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:53:40.880022 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:53:40.883850 systemd-logind[1191]: New session 4 of user core.
Sep 13 00:53:40.884817 systemd[1]: Started session-4.scope.
Sep 13 00:53:40.940721 sshd[1292]: pam_unix(sshd:session): session closed for user core
Sep 13 00:53:40.943755 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:46340.service: Deactivated successfully.
Sep 13 00:53:40.944294 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:53:40.944881 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:53:40.945957 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:46344.service.
Sep 13 00:53:40.946749 systemd-logind[1191]: Removed session 4.
Sep 13 00:53:40.986525 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 46344 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:53:40.988031 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:53:40.992305 systemd-logind[1191]: New session 5 of user core. Sep 13 00:53:40.993297 systemd[1]: Started session-5.scope. Sep 13 00:53:41.052071 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:53:41.052313 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:53:41.154325 systemd[1]: Starting docker.service... Sep 13 00:53:41.215243 env[1313]: time="2025-09-13T00:53:41.214675200Z" level=info msg="Starting up" Sep 13 00:53:41.216661 env[1313]: time="2025-09-13T00:53:41.216624636Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:41.216661 env[1313]: time="2025-09-13T00:53:41.216641429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:41.216661 env[1313]: time="2025-09-13T00:53:41.216660314Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:41.216790 env[1313]: time="2025-09-13T00:53:41.216670096Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:41.219171 env[1313]: time="2025-09-13T00:53:41.219126283Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:53:41.219171 env[1313]: time="2025-09-13T00:53:41.219153724Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:53:41.219171 env[1313]: time="2025-09-13T00:53:41.219170631Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:53:41.219337 env[1313]: time="2025-09-13T00:53:41.219192687Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:53:42.081915 env[1313]: time="2025-09-13T00:53:42.081844447Z" level=info msg="Loading containers: start." Sep 13 00:53:42.213263 kernel: Initializing XFRM netlink socket Sep 13 00:53:42.246491 env[1313]: time="2025-09-13T00:53:42.246430752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 00:53:42.297031 systemd-networkd[1022]: docker0: Link UP Sep 13 00:53:42.313731 env[1313]: time="2025-09-13T00:53:42.313677249Z" level=info msg="Loading containers: done." Sep 13 00:53:42.331354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck416903084-merged.mount: Deactivated successfully. Sep 13 00:53:42.333561 env[1313]: time="2025-09-13T00:53:42.333438079Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:53:42.333769 env[1313]: time="2025-09-13T00:53:42.333637448Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:53:42.333769 env[1313]: time="2025-09-13T00:53:42.333720283Z" level=info msg="Daemon has completed initialization" Sep 13 00:53:42.351772 systemd[1]: Started docker.service. Sep 13 00:53:42.357900 env[1313]: time="2025-09-13T00:53:42.357843589Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:53:43.421908 env[1206]: time="2025-09-13T00:53:43.421846239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:53:44.062473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969619622.mount: Deactivated successfully. 
Sep 13 00:53:45.701931 env[1206]: time="2025-09-13T00:53:45.701849600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:45.703772 env[1206]: time="2025-09-13T00:53:45.703714673Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:45.705680 env[1206]: time="2025-09-13T00:53:45.705630534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:45.707487 env[1206]: time="2025-09-13T00:53:45.707435107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:45.708027 env[1206]: time="2025-09-13T00:53:45.707996863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:53:45.708748 env[1206]: time="2025-09-13T00:53:45.708726126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:53:47.629760 env[1206]: time="2025-09-13T00:53:47.629683061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.631802 env[1206]: time="2025-09-13T00:53:47.631772737Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.634049 env[1206]: time="2025-09-13T00:53:47.633992452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.635783 env[1206]: time="2025-09-13T00:53:47.635746389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:47.636552 env[1206]: time="2025-09-13T00:53:47.636498125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:53:47.637659 env[1206]: time="2025-09-13T00:53:47.637628082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:53:49.475146 env[1206]: time="2025-09-13T00:53:49.474862061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:49.477145 env[1206]: time="2025-09-13T00:53:49.477063516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:49.479248 env[1206]: time="2025-09-13T00:53:49.479177765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:49.480965 env[1206]: time="2025-09-13T00:53:49.480900450Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:49.481574 env[1206]: time="2025-09-13T00:53:49.481540097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:53:49.482640 env[1206]: time="2025-09-13T00:53:49.482538731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:53:49.864703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:53:49.865112 systemd[1]: Stopped kubelet.service. Sep 13 00:53:49.865190 systemd[1]: kubelet.service: Consumed 2.259s CPU time. Sep 13 00:53:49.867963 systemd[1]: Starting kubelet.service... Sep 13 00:53:50.059747 systemd[1]: Started kubelet.service. Sep 13 00:53:50.235986 kubelet[1447]: E0913 00:53:50.235809 1447 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:53:50.239339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:53:50.239482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:53:50.915076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291494691.mount: Deactivated successfully. 
Sep 13 00:53:51.832920 env[1206]: time="2025-09-13T00:53:51.832833635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:51.835064 env[1206]: time="2025-09-13T00:53:51.835006099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:51.836664 env[1206]: time="2025-09-13T00:53:51.836641137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:51.838247 env[1206]: time="2025-09-13T00:53:51.838192869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:51.838692 env[1206]: time="2025-09-13T00:53:51.838649063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:53:51.839477 env[1206]: time="2025-09-13T00:53:51.839452237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:53:52.416113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358287087.mount: Deactivated successfully. 
Sep 13 00:53:53.585275 env[1206]: time="2025-09-13T00:53:53.585190455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:53.587218 env[1206]: time="2025-09-13T00:53:53.587173602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:53.591176 env[1206]: time="2025-09-13T00:53:53.591123988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:53.592951 env[1206]: time="2025-09-13T00:53:53.592909326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:53.593599 env[1206]: time="2025-09-13T00:53:53.593562081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:53:53.594174 env[1206]: time="2025-09-13T00:53:53.594134166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:53:54.142095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566395090.mount: Deactivated successfully. 
Sep 13 00:53:54.147557 env[1206]: time="2025-09-13T00:53:54.147501163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:54.149343 env[1206]: time="2025-09-13T00:53:54.149299115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:54.150575 env[1206]: time="2025-09-13T00:53:54.150536766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:54.152030 env[1206]: time="2025-09-13T00:53:54.152003781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:54.152572 env[1206]: time="2025-09-13T00:53:54.152547521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:53:54.153138 env[1206]: time="2025-09-13T00:53:54.153107278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:53:54.663230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494429716.mount: Deactivated successfully. 
Sep 13 00:53:57.347173 env[1206]: time="2025-09-13T00:53:57.347097073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.349031 env[1206]: time="2025-09-13T00:53:57.348999358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.350943 env[1206]: time="2025-09-13T00:53:57.350922490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.352834 env[1206]: time="2025-09-13T00:53:57.352806572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:53:57.353596 env[1206]: time="2025-09-13T00:53:57.353563990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:53:59.758401 systemd[1]: Stopped kubelet.service. Sep 13 00:53:59.760517 systemd[1]: Starting kubelet.service... Sep 13 00:53:59.781116 systemd[1]: Reloading. 
Sep 13 00:53:59.855823 /usr/lib/systemd/system-generators/torcx-generator[1501]: time="2025-09-13T00:53:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:53:59.855855 /usr/lib/systemd/system-generators/torcx-generator[1501]: time="2025-09-13T00:53:59Z" level=info msg="torcx already run" Sep 13 00:54:00.160067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:00.160087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:00.176771 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:00.253131 systemd[1]: Started kubelet.service. Sep 13 00:54:00.254602 systemd[1]: Stopping kubelet.service... Sep 13 00:54:00.254832 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:00.254994 systemd[1]: Stopped kubelet.service. Sep 13 00:54:00.256448 systemd[1]: Starting kubelet.service... Sep 13 00:54:00.346423 systemd[1]: Started kubelet.service. Sep 13 00:54:00.379972 kubelet[1549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:00.379972 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:54:00.379972 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:00.380347 kubelet[1549]: I0913 00:54:00.380030 1549 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:00.761944 kubelet[1549]: I0913 00:54:00.761910 1549 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:54:00.761944 kubelet[1549]: I0913 00:54:00.761935 1549 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:00.762210 kubelet[1549]: I0913 00:54:00.762175 1549 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:54:00.779252 kubelet[1549]: E0913 00:54:00.779183 1549 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:00.780116 kubelet[1549]: I0913 00:54:00.780080 1549 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:00.785459 kubelet[1549]: E0913 00:54:00.785422 1549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:00.785459 kubelet[1549]: I0913 00:54:00.785450 1549 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:54:00.790478 kubelet[1549]: I0913 00:54:00.790460 1549 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:54:00.790577 kubelet[1549]: I0913 00:54:00.790563 1549 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:54:00.790722 kubelet[1549]: I0913 00:54:00.790695 1549 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:00.790904 kubelet[1549]: I0913 00:54:00.790721 1549 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:54:00.791012 kubelet[1549]: I0913 00:54:00.790921 1549 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:54:00.791012 kubelet[1549]: I0913 00:54:00.790930 1549 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:54:00.791061 kubelet[1549]: I0913 00:54:00.791045 1549 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:00.796834 kubelet[1549]: I0913 00:54:00.796808 1549 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:54:00.796888 kubelet[1549]: I0913 00:54:00.796850 1549 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:00.796918 kubelet[1549]: I0913 00:54:00.796893 1549 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:54:00.796943 kubelet[1549]: I0913 00:54:00.796920 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:00.813615 kubelet[1549]: I0913 00:54:00.813579 1549 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:00.814234 kubelet[1549]: I0913 00:54:00.814024 1549 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:00.814402 kubelet[1549]: W0913 00:54:00.814277 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:00.814402 kubelet[1549]: E0913 00:54:00.814337 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:00.814812 kubelet[1549]: W0913 00:54:00.814757 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:00.814865 kubelet[1549]: E0913 00:54:00.814822 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:00.814975 kubelet[1549]: W0913 00:54:00.814956 1549 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:54:00.818708 kubelet[1549]: I0913 00:54:00.818685 1549 server.go:1274] "Started kubelet" Sep 13 00:54:00.819132 kubelet[1549]: I0913 00:54:00.819055 1549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:00.819415 kubelet[1549]: I0913 00:54:00.819382 1549 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:00.819468 kubelet[1549]: I0913 00:54:00.819456 1549 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:00.821184 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:54:00.821353 kubelet[1549]: I0913 00:54:00.821305 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:00.821353 kubelet[1549]: I0913 00:54:00.821356 1549 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:54:00.822152 kubelet[1549]: I0913 00:54:00.821739 1549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:00.826183 kubelet[1549]: I0913 00:54:00.826154 1549 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:54:00.826869 kubelet[1549]: E0913 00:54:00.826361 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:00.826869 kubelet[1549]: I0913 00:54:00.826412 1549 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:54:00.826869 kubelet[1549]: I0913 00:54:00.826483 1549 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:00.826869 kubelet[1549]: W0913 00:54:00.826715 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:00.826869 kubelet[1549]: E0913 00:54:00.826747 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:00.827020 kubelet[1549]: E0913 00:54:00.826928 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: 
connection refused" interval="200ms" Sep 13 00:54:00.827137 kubelet[1549]: I0913 00:54:00.827116 1549 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:00.827245 kubelet[1549]: I0913 00:54:00.827187 1549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:00.827632 kubelet[1549]: E0913 00:54:00.824533 1549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b16e0de03413 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:54:00.818652179 +0000 UTC m=+0.468784782,LastTimestamp:2025-09-13 00:54:00.818652179 +0000 UTC m=+0.468784782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:54:00.828172 kubelet[1549]: I0913 00:54:00.828142 1549 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:54:00.828674 kubelet[1549]: E0913 00:54:00.828635 1549 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:00.836052 kubelet[1549]: I0913 00:54:00.836014 1549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:00.837067 kubelet[1549]: I0913 00:54:00.836966 1549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:54:00.837067 kubelet[1549]: I0913 00:54:00.837001 1549 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:54:00.837067 kubelet[1549]: I0913 00:54:00.837030 1549 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:54:00.837214 kubelet[1549]: E0913 00:54:00.837077 1549 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:54:00.840902 kubelet[1549]: I0913 00:54:00.840883 1549 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:54:00.840902 kubelet[1549]: I0913 00:54:00.840897 1549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:00.840972 kubelet[1549]: I0913 00:54:00.840915 1549 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:00.840972 kubelet[1549]: W0913 00:54:00.840931 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:00.841027 kubelet[1549]: E0913 00:54:00.840977 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:00.927476 kubelet[1549]: E0913 00:54:00.927440 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:00.937673 kubelet[1549]: E0913 00:54:00.937643 1549 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:54:01.027341 kubelet[1549]: E0913 00:54:01.027259 1549 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" Sep 13 00:54:01.028271 kubelet[1549]: E0913 00:54:01.028246 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:01.113533 kubelet[1549]: I0913 00:54:01.113493 1549 policy_none.go:49] "None policy: Start" Sep 13 00:54:01.114132 kubelet[1549]: I0913 00:54:01.114107 1549 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:54:01.114223 kubelet[1549]: I0913 00:54:01.114144 1549 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:01.120878 systemd[1]: Created slice kubepods.slice. Sep 13 00:54:01.125010 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:54:01.127331 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:54:01.128990 kubelet[1549]: E0913 00:54:01.128966 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:01.131895 kubelet[1549]: I0913 00:54:01.131857 1549 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:01.132145 kubelet[1549]: I0913 00:54:01.132029 1549 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:01.132145 kubelet[1549]: I0913 00:54:01.132051 1549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:01.132897 kubelet[1549]: I0913 00:54:01.132327 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:01.133401 kubelet[1549]: E0913 00:54:01.133369 1549 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:54:01.143350 
systemd[1]: Created slice kubepods-burstable-pod6b0b11662645adc289256c065235b2f8.slice. Sep 13 00:54:01.158076 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 13 00:54:01.169047 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 13 00:54:01.229808 kubelet[1549]: I0913 00:54:01.229737 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:01.229808 kubelet[1549]: I0913 00:54:01.229805 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:01.229934 kubelet[1549]: I0913 00:54:01.229831 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:01.229934 kubelet[1549]: I0913 00:54:01.229857 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:01.229934 kubelet[1549]: I0913 00:54:01.229872 1549 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:01.229934 kubelet[1549]: I0913 00:54:01.229886 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:01.229934 kubelet[1549]: I0913 00:54:01.229899 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:01.230058 kubelet[1549]: I0913 00:54:01.229912 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:01.230058 kubelet[1549]: I0913 00:54:01.229929 1549 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:01.233700 kubelet[1549]: I0913 00:54:01.233675 1549 kubelet_node_status.go:72] 
"Attempting to register node" node="localhost" Sep 13 00:54:01.234084 kubelet[1549]: E0913 00:54:01.234040 1549 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 13 00:54:01.427834 kubelet[1549]: E0913 00:54:01.427785 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" Sep 13 00:54:01.435898 kubelet[1549]: I0913 00:54:01.435868 1549 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:54:01.436184 kubelet[1549]: E0913 00:54:01.436143 1549 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 13 00:54:01.457599 kubelet[1549]: E0913 00:54:01.457539 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:01.458340 env[1206]: time="2025-09-13T00:54:01.458284680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6b0b11662645adc289256c065235b2f8,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:01.468502 kubelet[1549]: E0913 00:54:01.468437 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:01.468912 env[1206]: time="2025-09-13T00:54:01.468872954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:01.471164 
kubelet[1549]: E0913 00:54:01.471127 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:01.471686 env[1206]: time="2025-09-13T00:54:01.471633670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:01.838387 kubelet[1549]: I0913 00:54:01.838268 1549 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:54:01.838621 kubelet[1549]: E0913 00:54:01.838596 1549 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Sep 13 00:54:01.911013 kubelet[1549]: W0913 00:54:01.910939 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:01.911013 kubelet[1549]: E0913 00:54:01.911013 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:02.095931 kubelet[1549]: W0913 00:54:02.095775 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:02.095931 kubelet[1549]: E0913 00:54:02.095851 1549 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:02.196372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167766435.mount: Deactivated successfully. Sep 13 00:54:02.200580 env[1206]: time="2025-09-13T00:54:02.200547986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.201373 kubelet[1549]: W0913 00:54:02.201313 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:02.201438 kubelet[1549]: E0913 00:54:02.201394 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:02.203101 env[1206]: time="2025-09-13T00:54:02.203063905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.207220 env[1206]: time="2025-09-13T00:54:02.207169325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.210614 env[1206]: time="2025-09-13T00:54:02.210577500Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.211940 env[1206]: time="2025-09-13T00:54:02.211893481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.213225 env[1206]: time="2025-09-13T00:54:02.213186389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.215256 env[1206]: time="2025-09-13T00:54:02.215232481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.217301 env[1206]: time="2025-09-13T00:54:02.217278722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.219576 env[1206]: time="2025-09-13T00:54:02.219550534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.220960 env[1206]: time="2025-09-13T00:54:02.220937979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.221687 env[1206]: time="2025-09-13T00:54:02.221645982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.222400 env[1206]: time="2025-09-13T00:54:02.222371817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:02.229533 kubelet[1549]: E0913 00:54:02.229481 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" Sep 13 00:54:02.244588 env[1206]: time="2025-09-13T00:54:02.244348852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:02.244588 env[1206]: time="2025-09-13T00:54:02.244415860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:02.244588 env[1206]: time="2025-09-13T00:54:02.244429268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:02.245049 env[1206]: time="2025-09-13T00:54:02.244954382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9efc6ec00800cfdc6878a3088b8dbb4fb9dee3d1c0bea7402e94a8e1ccc13424 pid=1595 runtime=io.containerd.runc.v2 Sep 13 00:54:02.246551 env[1206]: time="2025-09-13T00:54:02.245573729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:02.246551 env[1206]: time="2025-09-13T00:54:02.245638148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:02.246551 env[1206]: time="2025-09-13T00:54:02.245661360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:02.246551 env[1206]: time="2025-09-13T00:54:02.245787839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7387aabefbd423f873ce94e7b815b1731ad21c14d45ad7a04e90fb97b2760bc7 pid=1607 runtime=io.containerd.runc.v2 Sep 13 00:54:02.249949 env[1206]: time="2025-09-13T00:54:02.249870077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:02.250091 env[1206]: time="2025-09-13T00:54:02.249963478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:02.250091 env[1206]: time="2025-09-13T00:54:02.249986209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:02.250255 env[1206]: time="2025-09-13T00:54:02.250195271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86d1d55f2fdf4413cbbe785a6a0cab9adf2b4c610cd606287dacd28a0e4f3c55 pid=1622 runtime=io.containerd.runc.v2 Sep 13 00:54:02.262801 systemd[1]: Started cri-containerd-86d1d55f2fdf4413cbbe785a6a0cab9adf2b4c610cd606287dacd28a0e4f3c55.scope. Sep 13 00:54:02.266801 systemd[1]: Started cri-containerd-7387aabefbd423f873ce94e7b815b1731ad21c14d45ad7a04e90fb97b2760bc7.scope. Sep 13 00:54:02.270550 systemd[1]: Started cri-containerd-9efc6ec00800cfdc6878a3088b8dbb4fb9dee3d1c0bea7402e94a8e1ccc13424.scope. 
Sep 13 00:54:02.306830 env[1206]: time="2025-09-13T00:54:02.306777728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7387aabefbd423f873ce94e7b815b1731ad21c14d45ad7a04e90fb97b2760bc7\"" Sep 13 00:54:02.308115 kubelet[1549]: E0913 00:54:02.308081 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.310744 env[1206]: time="2025-09-13T00:54:02.310710978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6b0b11662645adc289256c065235b2f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efc6ec00800cfdc6878a3088b8dbb4fb9dee3d1c0bea7402e94a8e1ccc13424\"" Sep 13 00:54:02.312586 kubelet[1549]: E0913 00:54:02.312453 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.312660 env[1206]: time="2025-09-13T00:54:02.312369443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"86d1d55f2fdf4413cbbe785a6a0cab9adf2b4c610cd606287dacd28a0e4f3c55\"" Sep 13 00:54:02.313113 env[1206]: time="2025-09-13T00:54:02.313090542Z" level=info msg="CreateContainer within sandbox \"7387aabefbd423f873ce94e7b815b1731ad21c14d45ad7a04e90fb97b2760bc7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:54:02.313168 kubelet[1549]: E0913 00:54:02.313116 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.314573 env[1206]: time="2025-09-13T00:54:02.314538501Z" 
level=info msg="CreateContainer within sandbox \"9efc6ec00800cfdc6878a3088b8dbb4fb9dee3d1c0bea7402e94a8e1ccc13424\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:54:02.315363 env[1206]: time="2025-09-13T00:54:02.315308744Z" level=info msg="CreateContainer within sandbox \"86d1d55f2fdf4413cbbe785a6a0cab9adf2b4c610cd606287dacd28a0e4f3c55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:54:02.332052 env[1206]: time="2025-09-13T00:54:02.332007356Z" level=info msg="CreateContainer within sandbox \"7387aabefbd423f873ce94e7b815b1731ad21c14d45ad7a04e90fb97b2760bc7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb4176223ca77c31236356d115d8a83d015529443927f61e4c65fda158c24ffb\"" Sep 13 00:54:02.333119 env[1206]: time="2025-09-13T00:54:02.333076932Z" level=info msg="StartContainer for \"bb4176223ca77c31236356d115d8a83d015529443927f61e4c65fda158c24ffb\"" Sep 13 00:54:02.337157 env[1206]: time="2025-09-13T00:54:02.337129666Z" level=info msg="CreateContainer within sandbox \"86d1d55f2fdf4413cbbe785a6a0cab9adf2b4c610cd606287dacd28a0e4f3c55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc6b982e5037142d815b1ee82618674ad779fcea929cd312d55a696297233530\"" Sep 13 00:54:02.338104 env[1206]: time="2025-09-13T00:54:02.338045968Z" level=info msg="StartContainer for \"cc6b982e5037142d815b1ee82618674ad779fcea929cd312d55a696297233530\"" Sep 13 00:54:02.338655 env[1206]: time="2025-09-13T00:54:02.338626067Z" level=info msg="CreateContainer within sandbox \"9efc6ec00800cfdc6878a3088b8dbb4fb9dee3d1c0bea7402e94a8e1ccc13424\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c3967cee54b285dac18851fc8941f302be2895be92a0caa1f14a71714e55413\"" Sep 13 00:54:02.339092 env[1206]: time="2025-09-13T00:54:02.339056543Z" level=info msg="StartContainer for \"6c3967cee54b285dac18851fc8941f302be2895be92a0caa1f14a71714e55413\"" Sep 13 
00:54:02.345994 systemd[1]: Started cri-containerd-bb4176223ca77c31236356d115d8a83d015529443927f61e4c65fda158c24ffb.scope. Sep 13 00:54:02.354916 systemd[1]: Started cri-containerd-6c3967cee54b285dac18851fc8941f302be2895be92a0caa1f14a71714e55413.scope. Sep 13 00:54:02.360532 systemd[1]: Started cri-containerd-cc6b982e5037142d815b1ee82618674ad779fcea929cd312d55a696297233530.scope. Sep 13 00:54:02.395745 kubelet[1549]: W0913 00:54:02.395677 1549 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Sep 13 00:54:02.395880 kubelet[1549]: E0913 00:54:02.395761 1549 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:02.508656 env[1206]: time="2025-09-13T00:54:02.508573529Z" level=info msg="StartContainer for \"bb4176223ca77c31236356d115d8a83d015529443927f61e4c65fda158c24ffb\" returns successfully" Sep 13 00:54:02.626018 env[1206]: time="2025-09-13T00:54:02.625973625Z" level=info msg="StartContainer for \"cc6b982e5037142d815b1ee82618674ad779fcea929cd312d55a696297233530\" returns successfully" Sep 13 00:54:02.626217 env[1206]: time="2025-09-13T00:54:02.626005849Z" level=info msg="StartContainer for \"6c3967cee54b285dac18851fc8941f302be2895be92a0caa1f14a71714e55413\" returns successfully" Sep 13 00:54:02.640270 kubelet[1549]: I0913 00:54:02.639920 1549 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:54:02.845660 kubelet[1549]: E0913 00:54:02.845611 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.847796 kubelet[1549]: E0913 00:54:02.847762 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:02.849708 kubelet[1549]: E0913 00:54:02.849681 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:03.430414 kubelet[1549]: I0913 00:54:03.430374 1549 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:54:03.430414 kubelet[1549]: E0913 00:54:03.430409 1549 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:54:03.437864 kubelet[1549]: E0913 00:54:03.437814 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:03.538367 kubelet[1549]: E0913 00:54:03.538317 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:03.638952 kubelet[1549]: E0913 00:54:03.638891 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:03.739910 kubelet[1549]: E0913 00:54:03.739810 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:03.840650 kubelet[1549]: E0913 00:54:03.840618 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:03.851015 kubelet[1549]: E0913 00:54:03.850980 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
13 00:54:03.941070 kubelet[1549]: E0913 00:54:03.941022 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:04.041886 kubelet[1549]: E0913 00:54:04.041796 1549 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:04.799420 kubelet[1549]: I0913 00:54:04.799377 1549 apiserver.go:52] "Watching apiserver" Sep 13 00:54:04.826801 kubelet[1549]: I0913 00:54:04.826732 1549 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:54:05.281802 systemd[1]: Reloading. Sep 13 00:54:05.342185 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2025-09-13T00:54:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:05.342238 /usr/lib/systemd/system-generators/torcx-generator[1854]: time="2025-09-13T00:54:05Z" level=info msg="torcx already run" Sep 13 00:54:05.416567 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:05.416589 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:05.435852 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:05.528924 kubelet[1549]: I0913 00:54:05.528879 1549 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:05.529130 systemd[1]: Stopping kubelet.service... 
Sep 13 00:54:05.553640 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:05.553872 systemd[1]: Stopped kubelet.service. Sep 13 00:54:05.555631 systemd[1]: Starting kubelet.service... Sep 13 00:54:05.645804 systemd[1]: Started kubelet.service. Sep 13 00:54:05.687219 kubelet[1899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:05.687219 kubelet[1899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:54:05.687219 kubelet[1899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:05.687682 kubelet[1899]: I0913 00:54:05.687266 1899 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:05.694274 kubelet[1899]: I0913 00:54:05.694237 1899 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:54:05.694274 kubelet[1899]: I0913 00:54:05.694261 1899 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:05.694463 kubelet[1899]: I0913 00:54:05.694436 1899 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:54:05.695559 kubelet[1899]: I0913 00:54:05.695535 1899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:54:05.697160 kubelet[1899]: I0913 00:54:05.697129 1899 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:05.700121 kubelet[1899]: E0913 00:54:05.700097 1899 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:05.700121 kubelet[1899]: I0913 00:54:05.700119 1899 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:54:05.703464 kubelet[1899]: I0913 00:54:05.703434 1899 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:54:05.703543 kubelet[1899]: I0913 00:54:05.703528 1899 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:54:05.703653 kubelet[1899]: I0913 00:54:05.703624 1899 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:05.704213 kubelet[1899]: I0913 00:54:05.703648 1899 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704223 1899 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704245 1899 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704285 1899 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704390 1899 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704403 1899 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704430 1899 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:54:05.704454 kubelet[1899]: I0913 00:54:05.704440 1899 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:54:05.705168 kubelet[1899]: I0913 00:54:05.705149 1899 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 00:54:05.705523 kubelet[1899]: I0913 00:54:05.705502 1899 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:54:05.705907 kubelet[1899]: I0913 00:54:05.705890 1899 server.go:1274] "Started kubelet"
Sep 13 00:54:05.713427 kubelet[1899]: I0913 00:54:05.709758 1899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:54:05.713427 kubelet[1899]: I0913 00:54:05.710004 1899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:54:05.713427 kubelet[1899]: E0913 00:54:05.710097 1899 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:54:05.713427 kubelet[1899]: I0913 00:54:05.712034 1899 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:54:05.713427 kubelet[1899]: I0913 00:54:05.712213 1899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:54:05.714127 kubelet[1899]: I0913 00:54:05.713745 1899 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:54:05.714127 kubelet[1899]: I0913 00:54:05.713904 1899 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:54:05.714127 kubelet[1899]: I0913 00:54:05.713930 1899 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:54:05.714967 kubelet[1899]: I0913 00:54:05.714928 1899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:54:05.715103 kubelet[1899]: I0913 00:54:05.715078 1899 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:54:05.718455 kubelet[1899]: I0913 00:54:05.718313 1899 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:54:05.718455 kubelet[1899]: I0913 00:54:05.718398 1899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:54:05.719748 kubelet[1899]: I0913 00:54:05.719722 1899 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:54:05.729064 kubelet[1899]: I0913 00:54:05.729022 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:54:05.730747 kubelet[1899]: I0913 00:54:05.730709 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:54:05.730747 kubelet[1899]: I0913 00:54:05.730746 1899 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:54:05.730839 kubelet[1899]: I0913 00:54:05.730773 1899 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:54:05.730866 kubelet[1899]: E0913 00:54:05.730826 1899 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:54:05.747108 kubelet[1899]: I0913 00:54:05.747085 1899 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:54:05.747325 kubelet[1899]: I0913 00:54:05.747308 1899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:54:05.747424 kubelet[1899]: I0913 00:54:05.747410 1899 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:54:05.747623 kubelet[1899]: I0913 00:54:05.747607 1899 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:54:05.747724 kubelet[1899]: I0913 00:54:05.747692 1899 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:54:05.747801 kubelet[1899]: I0913 00:54:05.747787 1899 policy_none.go:49] "None policy: Start"
Sep 13 00:54:05.748410 kubelet[1899]: I0913 00:54:05.748397 1899 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:54:05.748504 kubelet[1899]: I0913 00:54:05.748490 1899 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:54:05.748705 kubelet[1899]: I0913 00:54:05.748691 1899 state_mem.go:75] "Updated machine memory state"
Sep 13 00:54:05.752328 kubelet[1899]: I0913 00:54:05.752312 1899 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:54:05.752686 kubelet[1899]: I0913 00:54:05.752673 1899 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:54:05.752851 kubelet[1899]: I0913 00:54:05.752820 1899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:54:05.753176 kubelet[1899]: I0913 00:54:05.753140 1899 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:54:05.857336 kubelet[1899]: I0913 00:54:05.857216 1899 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:54:05.870402 kubelet[1899]: I0913 00:54:05.870352 1899 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 13 00:54:05.870612 kubelet[1899]: I0913 00:54:05.870595 1899 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:54:05.915214 kubelet[1899]: I0913 00:54:05.915152 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:05.915214 kubelet[1899]: I0913 00:54:05.915184 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:05.915214 kubelet[1899]: I0913 00:54:05.915218 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:05.915444 kubelet[1899]: I0913 00:54:05.915237 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:54:05.915444 kubelet[1899]: I0913 00:54:05.915254 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:54:05.915444 kubelet[1899]: I0913 00:54:05.915268 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:54:05.915444 kubelet[1899]: I0913 00:54:05.915284 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b0b11662645adc289256c065235b2f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6b0b11662645adc289256c065235b2f8\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:54:05.915444 kubelet[1899]: I0913 00:54:05.915297 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:05.915577 kubelet[1899]: I0913 00:54:05.915312 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:06.138394 kubelet[1899]: E0913 00:54:06.138361 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.138525 kubelet[1899]: E0913 00:54:06.138397 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.138525 kubelet[1899]: E0913 00:54:06.138505 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.333491 sudo[1934]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 00:54:06.333693 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 00:54:06.705568 kubelet[1899]: I0913 00:54:06.705518 1899 apiserver.go:52] "Watching apiserver"
Sep 13 00:54:06.714823 kubelet[1899]: I0913 00:54:06.714786 1899 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:54:06.751469 kubelet[1899]: E0913 00:54:06.751429 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:54:06.751469 kubelet[1899]: E0913 00:54:06.751439 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:54:06.751726 kubelet[1899]: E0913 00:54:06.751588 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.751726 kubelet[1899]: E0913 00:54:06.751629 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.751726 kubelet[1899]: E0913 00:54:06.751708 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:54:06.751850 kubelet[1899]: E0913 00:54:06.751788 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:06.769953 kubelet[1899]: I0913 00:54:06.769895 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.769868998 podStartE2EDuration="1.769868998s" podCreationTimestamp="2025-09-13 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:06.769705502 +0000 UTC m=+1.118210521" watchObservedRunningTime="2025-09-13 00:54:06.769868998 +0000 UTC m=+1.118374028"
Sep 13 00:54:06.770158 kubelet[1899]: I0913 00:54:06.769994 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.769990536 podStartE2EDuration="1.769990536s" podCreationTimestamp="2025-09-13 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:06.763625993 +0000 UTC m=+1.112131022" watchObservedRunningTime="2025-09-13 00:54:06.769990536 +0000 UTC m=+1.118495565"
Sep 13 00:54:06.782690 kubelet[1899]: I0913 00:54:06.782631 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.782611894 podStartE2EDuration="1.782611894s" podCreationTimestamp="2025-09-13 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:06.776093271 +0000 UTC m=+1.124598300" watchObservedRunningTime="2025-09-13 00:54:06.782611894 +0000 UTC m=+1.131116923"
Sep 13 00:54:06.783767 sudo[1934]: pam_unix(sudo:session): session closed for user root
Sep 13 00:54:07.741584 kubelet[1899]: E0913 00:54:07.741543 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:07.741949 kubelet[1899]: E0913 00:54:07.741882 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:07.742193 kubelet[1899]: E0913 00:54:07.742170 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:08.292526 sudo[1301]: pam_unix(sudo:session): session closed for user root
Sep 13 00:54:08.293988 sshd[1298]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:08.296095 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:46344.service: Deactivated successfully.
Sep 13 00:54:08.296756 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:54:08.296891 systemd[1]: session-5.scope: Consumed 4.533s CPU time.
Sep 13 00:54:08.297419 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:54:08.298232 systemd-logind[1191]: Removed session 5.
Sep 13 00:54:08.743231 kubelet[1899]: E0913 00:54:08.743176 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:11.794598 kubelet[1899]: I0913 00:54:11.794557 1899 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:54:11.795037 env[1206]: time="2025-09-13T00:54:11.794848879Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:54:11.795240 kubelet[1899]: I0913 00:54:11.795047 1899 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:54:12.864137 systemd[1]: Created slice kubepods-besteffort-podb0f03ee5_5075_4923_ad6a_d7f5b871b28f.slice.
Sep 13 00:54:12.875143 systemd[1]: Created slice kubepods-burstable-pod7a37a289_c013_4e1f_9200_a460b34b5201.slice.
Sep 13 00:54:12.900398 systemd[1]: Created slice kubepods-besteffort-podf71b6e1b_4d67_4663_8a18_411034e5bb47.slice.
Sep 13 00:54:12.966120 kubelet[1899]: I0913 00:54:12.966050 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-hostproc\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966120 kubelet[1899]: I0913 00:54:12.966105 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-cgroup\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966137 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-net\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966158 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f03ee5-5075-4923-ad6a-d7f5b871b28f-lib-modules\") pod \"kube-proxy-j6kkv\" (UID: \"b0f03ee5-5075-4923-ad6a-d7f5b871b28f\") " pod="kube-system/kube-proxy-j6kkv"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966173 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-lib-modules\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966253 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-xtables-lock\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966305 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0f03ee5-5075-4923-ad6a-d7f5b871b28f-kube-proxy\") pod \"kube-proxy-j6kkv\" (UID: \"b0f03ee5-5075-4923-ad6a-d7f5b871b28f\") " pod="kube-system/kube-proxy-j6kkv"
Sep 13 00:54:12.966546 kubelet[1899]: I0913 00:54:12.966322 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-etc-cni-netd\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966711 kubelet[1899]: I0913 00:54:12.966338 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrvj\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-kube-api-access-vfrvj\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966711 kubelet[1899]: I0913 00:54:12.966355 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-bpf-maps\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966711 kubelet[1899]: I0913 00:54:12.966369 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-config-path\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966711 kubelet[1899]: I0913 00:54:12.966385 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmqd\" (UniqueName: \"kubernetes.io/projected/f71b6e1b-4d67-4663-8a18-411034e5bb47-kube-api-access-lbmqd\") pod \"cilium-operator-5d85765b45-hl8jp\" (UID: \"f71b6e1b-4d67-4663-8a18-411034e5bb47\") " pod="kube-system/cilium-operator-5d85765b45-hl8jp"
Sep 13 00:54:12.966711 kubelet[1899]: I0913 00:54:12.966405 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f03ee5-5075-4923-ad6a-d7f5b871b28f-xtables-lock\") pod \"kube-proxy-j6kkv\" (UID: \"b0f03ee5-5075-4923-ad6a-d7f5b871b28f\") " pod="kube-system/kube-proxy-j6kkv"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966428 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7v8r\" (UniqueName: \"kubernetes.io/projected/b0f03ee5-5075-4923-ad6a-d7f5b871b28f-kube-api-access-c7v8r\") pod \"kube-proxy-j6kkv\" (UID: \"b0f03ee5-5075-4923-ad6a-d7f5b871b28f\") " pod="kube-system/kube-proxy-j6kkv"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966444 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-run\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966456 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a37a289-c013-4e1f-9200-a460b34b5201-clustermesh-secrets\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966484 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-kernel\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966504 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cni-path\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966831 kubelet[1899]: I0913 00:54:12.966518 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-hubble-tls\") pod \"cilium-jgkfs\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") " pod="kube-system/cilium-jgkfs"
Sep 13 00:54:12.966969 kubelet[1899]: I0913 00:54:12.966588 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f71b6e1b-4d67-4663-8a18-411034e5bb47-cilium-config-path\") pod \"cilium-operator-5d85765b45-hl8jp\" (UID: \"f71b6e1b-4d67-4663-8a18-411034e5bb47\") " pod="kube-system/cilium-operator-5d85765b45-hl8jp"
Sep 13 00:54:13.069539 kubelet[1899]: I0913 00:54:13.069502 1899 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:54:13.171847 kubelet[1899]: E0913 00:54:13.171687 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.172401 env[1206]: time="2025-09-13T00:54:13.172354670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j6kkv,Uid:b0f03ee5-5075-4923-ad6a-d7f5b871b28f,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:13.178093 kubelet[1899]: E0913 00:54:13.178061 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.179542 env[1206]: time="2025-09-13T00:54:13.179503042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkfs,Uid:7a37a289-c013-4e1f-9200-a460b34b5201,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:13.194620 env[1206]: time="2025-09-13T00:54:13.194541221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:13.194757 env[1206]: time="2025-09-13T00:54:13.194605160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:13.194757 env[1206]: time="2025-09-13T00:54:13.194637655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:13.195635 env[1206]: time="2025-09-13T00:54:13.194800475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/703b95e4075b940cbe008f985b794f2cac1b6927792a785c25d5a7b58d3e682e pid=1991 runtime=io.containerd.runc.v2
Sep 13 00:54:13.202948 env[1206]: time="2025-09-13T00:54:13.202848550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:13.202948 env[1206]: time="2025-09-13T00:54:13.202933242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:13.202948 env[1206]: time="2025-09-13T00:54:13.202956003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:13.203300 env[1206]: time="2025-09-13T00:54:13.203105597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1 pid=2009 runtime=io.containerd.runc.v2
Sep 13 00:54:13.203707 kubelet[1899]: E0913 00:54:13.203670 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.204619 env[1206]: time="2025-09-13T00:54:13.204574451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hl8jp,Uid:f71b6e1b-4d67-4663-8a18-411034e5bb47,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:13.224125 systemd[1]: Started cri-containerd-2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1.scope.
Sep 13 00:54:13.400033 systemd[1]: Started cri-containerd-703b95e4075b940cbe008f985b794f2cac1b6927792a785c25d5a7b58d3e682e.scope.
Sep 13 00:54:13.423177 env[1206]: time="2025-09-13T00:54:13.423054000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkfs,Uid:7a37a289-c013-4e1f-9200-a460b34b5201,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\""
Sep 13 00:54:13.423740 kubelet[1899]: E0913 00:54:13.423717 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.424999 env[1206]: time="2025-09-13T00:54:13.424965131Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:54:13.431851 env[1206]: time="2025-09-13T00:54:13.431819197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j6kkv,Uid:b0f03ee5-5075-4923-ad6a-d7f5b871b28f,Namespace:kube-system,Attempt:0,} returns sandbox id \"703b95e4075b940cbe008f985b794f2cac1b6927792a785c25d5a7b58d3e682e\""
Sep 13 00:54:13.432546 kubelet[1899]: E0913 00:54:13.432508 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.435730 env[1206]: time="2025-09-13T00:54:13.435673963Z" level=info msg="CreateContainer within sandbox \"703b95e4075b940cbe008f985b794f2cac1b6927792a785c25d5a7b58d3e682e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:54:13.439859 env[1206]: time="2025-09-13T00:54:13.439792072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:13.439859 env[1206]: time="2025-09-13T00:54:13.439843287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:13.439943 env[1206]: time="2025-09-13T00:54:13.439856433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:13.440252 env[1206]: time="2025-09-13T00:54:13.440036678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110 pid=2073 runtime=io.containerd.runc.v2
Sep 13 00:54:13.450216 systemd[1]: Started cri-containerd-d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110.scope.
Sep 13 00:54:13.452341 env[1206]: time="2025-09-13T00:54:13.450668898Z" level=info msg="CreateContainer within sandbox \"703b95e4075b940cbe008f985b794f2cac1b6927792a785c25d5a7b58d3e682e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b4b4dc20c4d6525f3ed412169dbceb72eb9398d6ca63ff1106e376420743929\""
Sep 13 00:54:13.452692 env[1206]: time="2025-09-13T00:54:13.452660439Z" level=info msg="StartContainer for \"6b4b4dc20c4d6525f3ed412169dbceb72eb9398d6ca63ff1106e376420743929\""
Sep 13 00:54:13.468697 systemd[1]: Started cri-containerd-6b4b4dc20c4d6525f3ed412169dbceb72eb9398d6ca63ff1106e376420743929.scope.
Sep 13 00:54:13.489060 env[1206]: time="2025-09-13T00:54:13.489011435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hl8jp,Uid:f71b6e1b-4d67-4663-8a18-411034e5bb47,Namespace:kube-system,Attempt:0,} returns sandbox id \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\""
Sep 13 00:54:13.490221 kubelet[1899]: E0913 00:54:13.489798 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.610362 env[1206]: time="2025-09-13T00:54:13.610286103Z" level=info msg="StartContainer for \"6b4b4dc20c4d6525f3ed412169dbceb72eb9398d6ca63ff1106e376420743929\" returns successfully"
Sep 13 00:54:13.752392 kubelet[1899]: E0913 00:54:13.751456 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:13.764769 kubelet[1899]: I0913 00:54:13.764715 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j6kkv" podStartSLOduration=1.764698594 podStartE2EDuration="1.764698594s" podCreationTimestamp="2025-09-13 00:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:13.764339206 +0000 UTC m=+8.112844235" watchObservedRunningTime="2025-09-13 00:54:13.764698594 +0000 UTC m=+8.113203613"
Sep 13 00:54:14.270583 kubelet[1899]: E0913 00:54:14.270533 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:14.754705 kubelet[1899]: E0913 00:54:14.754662 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:16.076536 kubelet[1899]: E0913 00:54:16.076504 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:16.757681 kubelet[1899]: E0913 00:54:16.757635 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:18.415358 kubelet[1899]: E0913 00:54:18.415300 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:20.723020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615373815.mount: Deactivated successfully.
Sep 13 00:54:23.024765 update_engine[1195]: I0913 00:54:23.024657 1195 update_attempter.cc:509] Updating boot flags...
Sep 13 00:54:27.025491 env[1206]: time="2025-09-13T00:54:27.025426068Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:27.027556 env[1206]: time="2025-09-13T00:54:27.027518482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:27.029069 env[1206]: time="2025-09-13T00:54:27.029019220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:27.029563 env[1206]: time="2025-09-13T00:54:27.029522048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:54:27.034672 env[1206]: time="2025-09-13T00:54:27.034630990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:54:27.038892 env[1206]: time="2025-09-13T00:54:27.038861262Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:54:27.053259 env[1206]: time="2025-09-13T00:54:27.053218605Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\""
Sep 13 00:54:27.054274 env[1206]: time="2025-09-13T00:54:27.053542573Z" level=info msg="StartContainer for \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\""
Sep 13 00:54:27.068830 systemd[1]: Started cri-containerd-cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e.scope.
Sep 13 00:54:27.092132 env[1206]: time="2025-09-13T00:54:27.092070898Z" level=info msg="StartContainer for \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\" returns successfully"
Sep 13 00:54:27.100617 systemd[1]: cri-containerd-cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e.scope: Deactivated successfully.
Sep 13 00:54:27.774487 kubelet[1899]: E0913 00:54:27.774453 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:28.051188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e-rootfs.mount: Deactivated successfully.
Sep 13 00:54:28.427194 env[1206]: time="2025-09-13T00:54:28.427137016Z" level=info msg="shim disconnected" id=cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e
Sep 13 00:54:28.427194 env[1206]: time="2025-09-13T00:54:28.427187529Z" level=warning msg="cleaning up after shim disconnected" id=cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e namespace=k8s.io
Sep 13 00:54:28.427194 env[1206]: time="2025-09-13T00:54:28.427216684Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:28.434076 env[1206]: time="2025-09-13T00:54:28.434027301Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2343 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:28.776722 kubelet[1899]: E0913 00:54:28.776592 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:28.778403 env[1206]: time="2025-09-13T00:54:28.778350790Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:54:28.793223 env[1206]: time="2025-09-13T00:54:28.793148372Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\""
Sep 13 00:54:28.793683 env[1206]: time="2025-09-13T00:54:28.793639526Z" level=info msg="StartContainer for \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\""
Sep 13 00:54:28.809237 systemd[1]: Started cri-containerd-fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d.scope.
Sep 13 00:54:28.851062 env[1206]: time="2025-09-13T00:54:28.850970077Z" level=info msg="StartContainer for \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\" returns successfully"
Sep 13 00:54:28.856704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:54:28.856915 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:54:28.857103 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 00:54:28.858870 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:54:28.863378 systemd[1]: cri-containerd-fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d.scope: Deactivated successfully.
Sep 13 00:54:28.874671 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:54:28.893868 env[1206]: time="2025-09-13T00:54:28.893798071Z" level=info msg="shim disconnected" id=fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d
Sep 13 00:54:28.893868 env[1206]: time="2025-09-13T00:54:28.893848774Z" level=warning msg="cleaning up after shim disconnected" id=fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d namespace=k8s.io
Sep 13 00:54:28.893868 env[1206]: time="2025-09-13T00:54:28.893858485Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:28.901703 env[1206]: time="2025-09-13T00:54:28.901600767Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:29.050727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d-rootfs.mount: Deactivated successfully.
Sep 13 00:54:29.282502 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:37870.service.
Sep 13 00:54:29.321701 sshd[2421]: Accepted publickey for core from 10.0.0.1 port 37870 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:29.323356 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:29.327096 systemd-logind[1191]: New session 6 of user core.
Sep 13 00:54:29.331217 systemd[1]: Started session-6.scope.
Sep 13 00:54:29.335947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543796454.mount: Deactivated successfully.
Sep 13 00:54:29.448476 sshd[2421]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:29.450607 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:37870.service: Deactivated successfully.
Sep 13 00:54:29.451433 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:54:29.452316 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:54:29.453071 systemd-logind[1191]: Removed session 6.
Sep 13 00:54:29.779023 kubelet[1899]: E0913 00:54:29.778987 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:29.780939 env[1206]: time="2025-09-13T00:54:29.780886875Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:54:29.808674 env[1206]: time="2025-09-13T00:54:29.808608061Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\""
Sep 13 00:54:29.809216 env[1206]: time="2025-09-13T00:54:29.809173332Z" level=info msg="StartContainer for \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\""
Sep 13 00:54:29.825373 systemd[1]: Started cri-containerd-d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93.scope.
Sep 13 00:54:29.852456 systemd[1]: cri-containerd-d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93.scope: Deactivated successfully.
Sep 13 00:54:29.853483 env[1206]: time="2025-09-13T00:54:29.853433329Z" level=info msg="StartContainer for \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\" returns successfully"
Sep 13 00:54:29.905076 env[1206]: time="2025-09-13T00:54:29.905024739Z" level=info msg="shim disconnected" id=d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93
Sep 13 00:54:29.905076 env[1206]: time="2025-09-13T00:54:29.905070239Z" level=warning msg="cleaning up after shim disconnected" id=d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93 namespace=k8s.io
Sep 13 00:54:29.905076 env[1206]: time="2025-09-13T00:54:29.905081634Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:29.912649 env[1206]: time="2025-09-13T00:54:29.912621597Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2482 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:30.369468 env[1206]: time="2025-09-13T00:54:30.369408417Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:30.371336 env[1206]: time="2025-09-13T00:54:30.371305654Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:30.372987 env[1206]: time="2025-09-13T00:54:30.372931444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:54:30.373434 env[1206]: time="2025-09-13T00:54:30.373402990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:54:30.375391 env[1206]: time="2025-09-13T00:54:30.375361863Z" level=info msg="CreateContainer within sandbox \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:54:30.387458 env[1206]: time="2025-09-13T00:54:30.387414997Z" level=info msg="CreateContainer within sandbox \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\""
Sep 13 00:54:30.387887 env[1206]: time="2025-09-13T00:54:30.387834579Z" level=info msg="StartContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\""
Sep 13 00:54:30.403137 systemd[1]: Started cri-containerd-9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2.scope.
Sep 13 00:54:30.426849 env[1206]: time="2025-09-13T00:54:30.426786546Z" level=info msg="StartContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" returns successfully"
Sep 13 00:54:30.781662 kubelet[1899]: E0913 00:54:30.781511 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:30.783760 kubelet[1899]: E0913 00:54:30.783730 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:30.785045 env[1206]: time="2025-09-13T00:54:30.784999251Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:54:30.835846 env[1206]: time="2025-09-13T00:54:30.835778415Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\""
Sep 13 00:54:30.840056 env[1206]: time="2025-09-13T00:54:30.840002674Z" level=info msg="StartContainer for \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\""
Sep 13 00:54:30.854135 kubelet[1899]: I0913 00:54:30.854062 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hl8jp" podStartSLOduration=1.9706341059999999 podStartE2EDuration="18.854034534s" podCreationTimestamp="2025-09-13 00:54:12 +0000 UTC" firstStartedPulling="2025-09-13 00:54:13.490758964 +0000 UTC m=+7.839263993" lastFinishedPulling="2025-09-13 00:54:30.374159392 +0000 UTC m=+24.722664421" observedRunningTime="2025-09-13 00:54:30.832929067 +0000 UTC m=+25.181434096" watchObservedRunningTime="2025-09-13 00:54:30.854034534 +0000 UTC m=+25.202539563"
Sep 13 00:54:30.864736 systemd[1]: Started cri-containerd-1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19.scope.
Sep 13 00:54:30.892736 systemd[1]: cri-containerd-1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19.scope: Deactivated successfully.
Sep 13 00:54:30.954214 env[1206]: time="2025-09-13T00:54:30.954145956Z" level=info msg="StartContainer for \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\" returns successfully"
Sep 13 00:54:31.051390 systemd[1]: run-containerd-runc-k8s.io-9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2-runc.iohx2a.mount: Deactivated successfully.
Sep 13 00:54:31.165627 env[1206]: time="2025-09-13T00:54:31.165576158Z" level=info msg="shim disconnected" id=1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19
Sep 13 00:54:31.165880 env[1206]: time="2025-09-13T00:54:31.165835653Z" level=warning msg="cleaning up after shim disconnected" id=1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19 namespace=k8s.io
Sep 13 00:54:31.165880 env[1206]: time="2025-09-13T00:54:31.165857361Z" level=info msg="cleaning up dead shim"
Sep 13 00:54:31.176945 env[1206]: time="2025-09-13T00:54:31.176867582Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:54:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2571 runtime=io.containerd.runc.v2\n"
Sep 13 00:54:31.787525 kubelet[1899]: E0913 00:54:31.787492 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:31.788021 kubelet[1899]: E0913 00:54:31.787595 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:31.789385 env[1206]: time="2025-09-13T00:54:31.789341597Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:54:31.808268 env[1206]: time="2025-09-13T00:54:31.807303451Z" level=info msg="CreateContainer within sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\""
Sep 13 00:54:31.808747 env[1206]: time="2025-09-13T00:54:31.808720114Z" level=info msg="StartContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\""
Sep 13 00:54:31.828452 systemd[1]: Started cri-containerd-c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a.scope.
Sep 13 00:54:31.873118 env[1206]: time="2025-09-13T00:54:31.873056007Z" level=info msg="StartContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" returns successfully"
Sep 13 00:54:32.041485 kubelet[1899]: I0913 00:54:32.041346 1899 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:54:32.068778 systemd[1]: Created slice kubepods-burstable-podc130b397_2abe_42b4_9ca4_0a47a7ce1204.slice.
Sep 13 00:54:32.071842 kubelet[1899]: W0913 00:54:32.071801 1899 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 13 00:54:32.078695 kubelet[1899]: E0913 00:54:32.078625 1899 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:54:32.082003 systemd[1]: Created slice kubepods-burstable-poda95edf84_ec39_4360_9672_429b0e5ccad4.slice.
Sep 13 00:54:32.095976 kubelet[1899]: I0913 00:54:32.095930 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95edf84-ec39-4360-9672-429b0e5ccad4-config-volume\") pod \"coredns-7c65d6cfc9-smsj5\" (UID: \"a95edf84-ec39-4360-9672-429b0e5ccad4\") " pod="kube-system/coredns-7c65d6cfc9-smsj5"
Sep 13 00:54:32.096353 kubelet[1899]: I0913 00:54:32.096324 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4xk\" (UniqueName: \"kubernetes.io/projected/a95edf84-ec39-4360-9672-429b0e5ccad4-kube-api-access-gx4xk\") pod \"coredns-7c65d6cfc9-smsj5\" (UID: \"a95edf84-ec39-4360-9672-429b0e5ccad4\") " pod="kube-system/coredns-7c65d6cfc9-smsj5"
Sep 13 00:54:32.096478 kubelet[1899]: I0913 00:54:32.096454 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghmw\" (UniqueName: \"kubernetes.io/projected/c130b397-2abe-42b4-9ca4-0a47a7ce1204-kube-api-access-lghmw\") pod \"coredns-7c65d6cfc9-jnnq5\" (UID: \"c130b397-2abe-42b4-9ca4-0a47a7ce1204\") " pod="kube-system/coredns-7c65d6cfc9-jnnq5"
Sep 13 00:54:32.096598 kubelet[1899]: I0913 00:54:32.096576 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c130b397-2abe-42b4-9ca4-0a47a7ce1204-config-volume\") pod \"coredns-7c65d6cfc9-jnnq5\" (UID: \"c130b397-2abe-42b4-9ca4-0a47a7ce1204\") " pod="kube-system/coredns-7c65d6cfc9-jnnq5"
Sep 13 00:54:32.793969 kubelet[1899]: E0913 00:54:32.793940 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:32.958773 kubelet[1899]: I0913 00:54:32.958707 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jgkfs" podStartSLOduration=7.352051211 podStartE2EDuration="20.958690054s" podCreationTimestamp="2025-09-13 00:54:12 +0000 UTC" firstStartedPulling="2025-09-13 00:54:13.424546216 +0000 UTC m=+7.773051235" lastFinishedPulling="2025-09-13 00:54:27.031185049 +0000 UTC m=+21.379690078" observedRunningTime="2025-09-13 00:54:32.958610161 +0000 UTC m=+27.307115190" watchObservedRunningTime="2025-09-13 00:54:32.958690054 +0000 UTC m=+27.307195073"
Sep 13 00:54:33.197956 kubelet[1899]: E0913 00:54:33.197902 1899 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:54:33.198163 kubelet[1899]: E0913 00:54:33.198022 1899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a95edf84-ec39-4360-9672-429b0e5ccad4-config-volume podName:a95edf84-ec39-4360-9672-429b0e5ccad4 nodeName:}" failed. No retries permitted until 2025-09-13 00:54:33.69799544 +0000 UTC m=+28.046500469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a95edf84-ec39-4360-9672-429b0e5ccad4-config-volume") pod "coredns-7c65d6cfc9-smsj5" (UID: "a95edf84-ec39-4360-9672-429b0e5ccad4") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:54:33.198163 kubelet[1899]: E0913 00:54:33.197911 1899 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:54:33.198163 kubelet[1899]: E0913 00:54:33.198087 1899 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c130b397-2abe-42b4-9ca4-0a47a7ce1204-config-volume podName:c130b397-2abe-42b4-9ca4-0a47a7ce1204 nodeName:}" failed. No retries permitted until 2025-09-13 00:54:33.698070271 +0000 UTC m=+28.046575310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c130b397-2abe-42b4-9ca4-0a47a7ce1204-config-volume") pod "coredns-7c65d6cfc9-jnnq5" (UID: "c130b397-2abe-42b4-9ca4-0a47a7ce1204") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:54:33.795361 kubelet[1899]: E0913 00:54:33.795330 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:33.874516 kubelet[1899]: E0913 00:54:33.874451 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:33.875220 env[1206]: time="2025-09-13T00:54:33.875140884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jnnq5,Uid:c130b397-2abe-42b4-9ca4-0a47a7ce1204,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:33.886379 kubelet[1899]: E0913 00:54:33.886336 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:33.886708 env[1206]: time="2025-09-13T00:54:33.886661140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-smsj5,Uid:a95edf84-ec39-4360-9672-429b0e5ccad4,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:34.067114 systemd-networkd[1022]: cilium_host: Link UP
Sep 13 00:54:34.067458 systemd-networkd[1022]: cilium_net: Link UP
Sep 13 00:54:34.070047 systemd-networkd[1022]: cilium_net: Gained carrier
Sep 13 00:54:34.071567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 00:54:34.071633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 00:54:34.071847 systemd-networkd[1022]: cilium_host: Gained carrier
Sep 13 00:54:34.072085 systemd-networkd[1022]: cilium_host: Gained IPv6LL
Sep 13 00:54:34.148275 systemd-networkd[1022]: cilium_vxlan: Link UP
Sep 13 00:54:34.148284 systemd-networkd[1022]: cilium_vxlan: Gained carrier
Sep 13 00:54:34.339242 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:54:34.454542 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:38734.service.
Sep 13 00:54:34.492894 sshd[2863]: Accepted publickey for core from 10.0.0.1 port 38734 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:34.494081 sshd[2863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:34.497860 systemd-logind[1191]: New session 7 of user core.
Sep 13 00:54:34.498752 systemd[1]: Started session-7.scope.
Sep 13 00:54:34.610157 sshd[2863]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:34.612747 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:38734.service: Deactivated successfully.
Sep 13 00:54:34.613482 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:54:34.614107 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:54:34.614914 systemd-logind[1191]: Removed session 7.
Sep 13 00:54:34.802873 kubelet[1899]: E0913 00:54:34.802747 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:34.821295 systemd-networkd[1022]: cilium_net: Gained IPv6LL
Sep 13 00:54:34.884703 systemd-networkd[1022]: lxc_health: Link UP
Sep 13 00:54:34.894244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:54:34.894400 systemd-networkd[1022]: lxc_health: Gained carrier
Sep 13 00:54:35.011248 systemd-networkd[1022]: lxcd4c4bbc8311b: Link UP
Sep 13 00:54:35.019224 kernel: eth0: renamed from tmpd4d7d
Sep 13 00:54:35.029212 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd4c4bbc8311b: link becomes ready
Sep 13 00:54:35.029619 systemd-networkd[1022]: lxcd4c4bbc8311b: Gained carrier
Sep 13 00:54:35.045796 systemd-networkd[1022]: lxc9dc6de35acc9: Link UP
Sep 13 00:54:35.056221 kernel: eth0: renamed from tmp6828b
Sep 13 00:54:35.064920 systemd-networkd[1022]: lxc9dc6de35acc9: Gained carrier
Sep 13 00:54:35.065244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9dc6de35acc9: link becomes ready
Sep 13 00:54:35.800900 kubelet[1899]: E0913 00:54:35.800864 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:36.238399 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL
Sep 13 00:54:36.356352 systemd-networkd[1022]: lxcd4c4bbc8311b: Gained IPv6LL
Sep 13 00:54:36.484402 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Sep 13 00:54:36.484678 systemd-networkd[1022]: lxc9dc6de35acc9: Gained IPv6LL
Sep 13 00:54:38.228032 env[1206]: time="2025-09-13T00:54:38.227858528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:38.228032 env[1206]: time="2025-09-13T00:54:38.227898664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:38.228032 env[1206]: time="2025-09-13T00:54:38.227908043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:38.228830 env[1206]: time="2025-09-13T00:54:38.228068271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6828b4fc804b0e9f8a725740d7505d4ecc1bd96dea33ad13c962a275dfc765a6 pid=3161 runtime=io.containerd.runc.v2
Sep 13 00:54:38.234696 env[1206]: time="2025-09-13T00:54:38.234567216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:38.234859 env[1206]: time="2025-09-13T00:54:38.234703143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:38.234859 env[1206]: time="2025-09-13T00:54:38.234734629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:38.235104 env[1206]: time="2025-09-13T00:54:38.235034010Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07 pid=3170 runtime=io.containerd.runc.v2
Sep 13 00:54:38.242746 systemd[1]: Started cri-containerd-6828b4fc804b0e9f8a725740d7505d4ecc1bd96dea33ad13c962a275dfc765a6.scope.
Sep 13 00:54:38.256307 systemd[1]: run-containerd-runc-k8s.io-d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07-runc.vgXMDb.mount: Deactivated successfully.
Sep 13 00:54:38.259834 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:54:38.259926 systemd[1]: Started cri-containerd-d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07.scope.
Sep 13 00:54:38.272974 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:54:38.290266 env[1206]: time="2025-09-13T00:54:38.290227271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-smsj5,Uid:a95edf84-ec39-4360-9672-429b0e5ccad4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6828b4fc804b0e9f8a725740d7505d4ecc1bd96dea33ad13c962a275dfc765a6\""
Sep 13 00:54:38.291044 kubelet[1899]: E0913 00:54:38.291024 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:38.292908 env[1206]: time="2025-09-13T00:54:38.292885804Z" level=info msg="CreateContainer within sandbox \"6828b4fc804b0e9f8a725740d7505d4ecc1bd96dea33ad13c962a275dfc765a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:54:38.299824 env[1206]: time="2025-09-13T00:54:38.299797880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jnnq5,Uid:c130b397-2abe-42b4-9ca4-0a47a7ce1204,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07\""
Sep 13 00:54:38.300949 kubelet[1899]: E0913 00:54:38.300924 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:38.305002 env[1206]: time="2025-09-13T00:54:38.304945108Z" level=info msg="CreateContainer within sandbox \"d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:54:38.316772 env[1206]: time="2025-09-13T00:54:38.316727949Z" level=info msg="CreateContainer within sandbox \"6828b4fc804b0e9f8a725740d7505d4ecc1bd96dea33ad13c962a275dfc765a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23f944e92b5f21bfa4b6840093b32afa4cc5a095c2d30b2d14466799dc27fd56\""
Sep 13 00:54:38.317257 env[1206]: time="2025-09-13T00:54:38.317222070Z" level=info msg="StartContainer for \"23f944e92b5f21bfa4b6840093b32afa4cc5a095c2d30b2d14466799dc27fd56\""
Sep 13 00:54:38.324116 env[1206]: time="2025-09-13T00:54:38.324065131Z" level=info msg="CreateContainer within sandbox \"d4d7dc481e12f274aa0c8c35dd10231e3d6b7d2d716da9fbac9b669b41b61a07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c28278d583cede5a525ce51aabd896dc1910189dccf17a34458154950a7f3a5\""
Sep 13 00:54:38.324565 env[1206]: time="2025-09-13T00:54:38.324525070Z" level=info msg="StartContainer for \"0c28278d583cede5a525ce51aabd896dc1910189dccf17a34458154950a7f3a5\""
Sep 13 00:54:38.335590 systemd[1]: Started cri-containerd-23f944e92b5f21bfa4b6840093b32afa4cc5a095c2d30b2d14466799dc27fd56.scope.
Sep 13 00:54:38.344065 systemd[1]: Started cri-containerd-0c28278d583cede5a525ce51aabd896dc1910189dccf17a34458154950a7f3a5.scope.
Sep 13 00:54:38.364632 env[1206]: time="2025-09-13T00:54:38.364379918Z" level=info msg="StartContainer for \"23f944e92b5f21bfa4b6840093b32afa4cc5a095c2d30b2d14466799dc27fd56\" returns successfully"
Sep 13 00:54:38.371232 env[1206]: time="2025-09-13T00:54:38.371174476Z" level=info msg="StartContainer for \"0c28278d583cede5a525ce51aabd896dc1910189dccf17a34458154950a7f3a5\" returns successfully"
Sep 13 00:54:38.807771 kubelet[1899]: E0913 00:54:38.807541 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:38.808794 kubelet[1899]: E0913 00:54:38.808368 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:38.816885 kubelet[1899]: I0913 00:54:38.816832 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-smsj5" podStartSLOduration=26.816815091 podStartE2EDuration="26.816815091s" podCreationTimestamp="2025-09-13 00:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:38.816442255 +0000 UTC m=+33.164947274" watchObservedRunningTime="2025-09-13 00:54:38.816815091 +0000 UTC m=+33.165320110"
Sep 13 00:54:38.823775 kubelet[1899]: I0913 00:54:38.823720 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jnnq5" podStartSLOduration=26.823700471 podStartE2EDuration="26.823700471s" podCreationTimestamp="2025-09-13 00:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:38.823538569 +0000 UTC m=+33.172043598" watchObservedRunningTime="2025-09-13 00:54:38.823700471 +0000 UTC m=+33.172205500"
Sep 13 00:54:39.615347 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:38742.service.
Sep 13 00:54:39.652822 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 38742 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:39.654004 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:39.657028 systemd-logind[1191]: New session 8 of user core.
Sep 13 00:54:39.657771 systemd[1]: Started session-8.scope.
Sep 13 00:54:39.810478 kubelet[1899]: E0913 00:54:39.810436 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:39.810853 kubelet[1899]: E0913 00:54:39.810585 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:39.817212 sshd[3320]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:39.819166 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:38742.service: Deactivated successfully.
Sep 13 00:54:39.819875 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:54:39.820542 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:54:39.821273 systemd-logind[1191]: Removed session 8.
Sep 13 00:54:40.500333 kubelet[1899]: I0913 00:54:40.500274 1899 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:54:40.500736 kubelet[1899]: E0913 00:54:40.500695 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:40.811515 kubelet[1899]: E0913 00:54:40.811408 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:40.812001 kubelet[1899]: E0913 00:54:40.811972 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:44.822547 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:52816.service.
Sep 13 00:54:44.859236 sshd[3338]: Accepted publickey for core from 10.0.0.1 port 52816 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:44.860669 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:44.864820 systemd-logind[1191]: New session 9 of user core.
Sep 13 00:54:44.865793 systemd[1]: Started session-9.scope.
Sep 13 00:54:44.979478 sshd[3338]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:44.982217 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:52816.service: Deactivated successfully.
Sep 13 00:54:44.983034 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:54:44.983634 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:54:44.984318 systemd-logind[1191]: Removed session 9.
Sep 13 00:54:49.986685 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:55300.service.
Sep 13 00:54:50.030369 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:50.031471 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:50.035221 systemd-logind[1191]: New session 10 of user core.
Sep 13 00:54:50.036170 systemd[1]: Started session-10.scope.
Sep 13 00:54:50.152848 sshd[3352]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:50.156395 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:55300.service: Deactivated successfully.
Sep 13 00:54:50.157075 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:54:50.157788 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:54:50.159129 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:55310.service.
Sep 13 00:54:50.159901 systemd-logind[1191]: Removed session 10.
Sep 13 00:54:50.197653 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:50.199159 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:50.203030 systemd-logind[1191]: New session 11 of user core.
Sep 13 00:54:50.204085 systemd[1]: Started session-11.scope.
Sep 13 00:54:50.356102 sshd[3366]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:50.360026 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:55318.service.
Sep 13 00:54:50.363384 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:55310.service: Deactivated successfully.
Sep 13 00:54:50.364448 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:54:50.369603 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:54:50.370870 systemd-logind[1191]: Removed session 11.
Sep 13 00:54:50.416685 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 55318 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:50.417856 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:50.421653 systemd-logind[1191]: New session 12 of user core.
Sep 13 00:54:50.422443 systemd[1]: Started session-12.scope.
Sep 13 00:54:50.526313 sshd[3377]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:50.528425 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:55318.service: Deactivated successfully.
Sep 13 00:54:50.529080 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:54:50.529820 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:54:50.530436 systemd-logind[1191]: Removed session 12.
Sep 13 00:54:55.532069 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:55320.service.
Sep 13 00:54:55.569646 sshd[3391]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:54:55.570952 sshd[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:54:55.574373 systemd-logind[1191]: New session 13 of user core.
Sep 13 00:54:55.575069 systemd[1]: Started session-13.scope.
Sep 13 00:54:55.676824 sshd[3391]: pam_unix(sshd:session): session closed for user core
Sep 13 00:54:55.679526 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:55320.service: Deactivated successfully.
Sep 13 00:54:55.680370 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:54:55.681119 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:54:55.681818 systemd-logind[1191]: Removed session 13.
Sep 13 00:55:00.682120 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:43848.service.
Sep 13 00:55:00.719302 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 43848 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:00.720431 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:00.723870 systemd-logind[1191]: New session 14 of user core.
Sep 13 00:55:00.724874 systemd[1]: Started session-14.scope.
Sep 13 00:55:00.826465 sshd[3406]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:00.829579 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:43848.service: Deactivated successfully.
Sep 13 00:55:00.830105 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:55:00.830658 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:55:00.831686 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:43858.service.
Sep 13 00:55:00.832517 systemd-logind[1191]: Removed session 14.
Sep 13 00:55:00.868916 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 43858 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:00.870608 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:00.874252 systemd-logind[1191]: New session 15 of user core.
Sep 13 00:55:00.875146 systemd[1]: Started session-15.scope.
Sep 13 00:55:01.115004 sshd[3419]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:01.117890 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:43858.service: Deactivated successfully.
Sep 13 00:55:01.118442 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:55:01.118968 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:55:01.120073 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:43874.service.
Sep 13 00:55:01.120911 systemd-logind[1191]: Removed session 15.
Sep 13 00:55:01.158096 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 43874 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:01.159258 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:01.162852 systemd-logind[1191]: New session 16 of user core.
Sep 13 00:55:01.163716 systemd[1]: Started session-16.scope.
Sep 13 00:55:02.527709 sshd[3430]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:02.531708 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:43882.service.
Sep 13 00:55:02.533174 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:43874.service: Deactivated successfully.
Sep 13 00:55:02.534025 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:55:02.534855 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:55:02.535851 systemd-logind[1191]: Removed session 16.
Sep 13 00:55:02.572038 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 43882 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:02.573326 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:02.576832 systemd-logind[1191]: New session 17 of user core.
Sep 13 00:55:02.577635 systemd[1]: Started session-17.scope.
Sep 13 00:55:02.790072 sshd[3448]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:02.792644 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:43882.service: Deactivated successfully.
Sep 13 00:55:02.793169 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:55:02.793924 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:55:02.794842 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:43884.service.
Sep 13 00:55:02.795887 systemd-logind[1191]: Removed session 17.
Sep 13 00:55:02.831741 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 43884 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:02.832857 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:02.836685 systemd-logind[1191]: New session 18 of user core.
Sep 13 00:55:02.837510 systemd[1]: Started session-18.scope.
Sep 13 00:55:02.943368 sshd[3463]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:02.945875 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:43884.service: Deactivated successfully.
Sep 13 00:55:02.946640 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:55:02.947245 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:55:02.947992 systemd-logind[1191]: Removed session 18.
Sep 13 00:55:07.949314 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:43886.service.
Sep 13 00:55:07.984980 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 43886 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:07.986091 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:07.989438 systemd-logind[1191]: New session 19 of user core.
Sep 13 00:55:07.990422 systemd[1]: Started session-19.scope.
Sep 13 00:55:08.098084 sshd[3478]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:08.100689 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:43886.service: Deactivated successfully.
Sep 13 00:55:08.101477 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:55:08.101950 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:55:08.102653 systemd-logind[1191]: Removed session 19.
Sep 13 00:55:13.102615 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:51058.service.
Sep 13 00:55:13.138418 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 51058 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:13.139504 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:13.142518 systemd-logind[1191]: New session 20 of user core.
Sep 13 00:55:13.143301 systemd[1]: Started session-20.scope.
Sep 13 00:55:13.270306 sshd[3495]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:13.272825 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:51058.service: Deactivated successfully.
Sep 13 00:55:13.273548 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:55:13.274068 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:55:13.274736 systemd-logind[1191]: Removed session 20.
Sep 13 00:55:18.276637 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:51062.service.
Sep 13 00:55:18.313037 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 51062 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:18.314093 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:18.317238 systemd-logind[1191]: New session 21 of user core.
Sep 13 00:55:18.318123 systemd[1]: Started session-21.scope.
Sep 13 00:55:18.478607 sshd[3512]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:18.480934 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:51062.service: Deactivated successfully.
Sep 13 00:55:18.481636 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:55:18.482056 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:55:18.482664 systemd-logind[1191]: Removed session 21.
Sep 13 00:55:23.482382 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:55090.service.
Sep 13 00:55:23.518028 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 55090 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:23.518964 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:23.522261 systemd-logind[1191]: New session 22 of user core.
Sep 13 00:55:23.523287 systemd[1]: Started session-22.scope.
Sep 13 00:55:23.619988 sshd[3525]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:23.622628 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:55090.service: Deactivated successfully.
Sep 13 00:55:23.623159 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:55:23.623624 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:55:23.624516 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:55100.service.
Sep 13 00:55:23.625468 systemd-logind[1191]: Removed session 22.
Sep 13 00:55:23.660862 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 55100 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:23.661933 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:23.664880 systemd-logind[1191]: New session 23 of user core.
Sep 13 00:55:23.665719 systemd[1]: Started session-23.scope.
Sep 13 00:55:25.164884 env[1206]: time="2025-09-13T00:55:25.164738308Z" level=info msg="StopContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" with timeout 30 (s)"
Sep 13 00:55:25.166175 env[1206]: time="2025-09-13T00:55:25.165479522Z" level=info msg="Stop container \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" with signal terminated"
Sep 13 00:55:25.177670 systemd[1]: run-containerd-runc-k8s.io-c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a-runc.u9QPBL.mount: Deactivated successfully.
Sep 13 00:55:25.178285 systemd[1]: cri-containerd-9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2.scope: Deactivated successfully.
Sep 13 00:55:25.189061 env[1206]: time="2025-09-13T00:55:25.189004537Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:55:25.194076 env[1206]: time="2025-09-13T00:55:25.194047275Z" level=info msg="StopContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" with timeout 2 (s)"
Sep 13 00:55:25.194424 env[1206]: time="2025-09-13T00:55:25.194383894Z" level=info msg="Stop container \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" with signal terminated"
Sep 13 00:55:25.199319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2-rootfs.mount: Deactivated successfully.
Sep 13 00:55:25.201043 systemd-networkd[1022]: lxc_health: Link DOWN
Sep 13 00:55:25.201050 systemd-networkd[1022]: lxc_health: Lost carrier
Sep 13 00:55:25.208832 env[1206]: time="2025-09-13T00:55:25.208771145Z" level=info msg="shim disconnected" id=9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2
Sep 13 00:55:25.208994 env[1206]: time="2025-09-13T00:55:25.208839181Z" level=warning msg="cleaning up after shim disconnected" id=9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2 namespace=k8s.io
Sep 13 00:55:25.208994 env[1206]: time="2025-09-13T00:55:25.208851102Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:25.217460 env[1206]: time="2025-09-13T00:55:25.217407125Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3594 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:25.221589 env[1206]: time="2025-09-13T00:55:25.221550709Z" level=info msg="StopContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" returns successfully"
Sep 13 00:55:25.222296 env[1206]: time="2025-09-13T00:55:25.222263901Z" level=info msg="StopPodSandbox for \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\""
Sep 13 00:55:25.222363 env[1206]: time="2025-09-13T00:55:25.222326567Z" level=info msg="Container to stop \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.224224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110-shm.mount: Deactivated successfully.
Sep 13 00:55:25.235508 systemd[1]: cri-containerd-d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110.scope: Deactivated successfully.
Sep 13 00:55:25.238762 systemd[1]: cri-containerd-c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a.scope: Deactivated successfully.
Sep 13 00:55:25.239047 systemd[1]: cri-containerd-c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a.scope: Consumed 5.887s CPU time.
Sep 13 00:55:25.258567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a-rootfs.mount: Deactivated successfully.
Sep 13 00:55:25.264719 env[1206]: time="2025-09-13T00:55:25.264678702Z" level=info msg="shim disconnected" id=c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a
Sep 13 00:55:25.265794 env[1206]: time="2025-09-13T00:55:25.265729665Z" level=warning msg="cleaning up after shim disconnected" id=c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a namespace=k8s.io
Sep 13 00:55:25.265794 env[1206]: time="2025-09-13T00:55:25.265762646Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:25.266012 env[1206]: time="2025-09-13T00:55:25.264697476Z" level=info msg="shim disconnected" id=d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110
Sep 13 00:55:25.266012 env[1206]: time="2025-09-13T00:55:25.265836732Z" level=warning msg="cleaning up after shim disconnected" id=d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110 namespace=k8s.io
Sep 13 00:55:25.266012 env[1206]: time="2025-09-13T00:55:25.265851570Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:25.271940 env[1206]: time="2025-09-13T00:55:25.271877607Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3642 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:25.272113 env[1206]: time="2025-09-13T00:55:25.272012695Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3643 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:25.272351 env[1206]: time="2025-09-13T00:55:25.272326543Z" level=info msg="TearDown network for sandbox \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\" successfully"
Sep 13 00:55:25.272387 env[1206]: time="2025-09-13T00:55:25.272349365Z" level=info msg="StopPodSandbox for \"d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110\" returns successfully"
Sep 13 00:55:25.274776 env[1206]: time="2025-09-13T00:55:25.274662311Z" level=info msg="StopContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" returns successfully"
Sep 13 00:55:25.275047 env[1206]: time="2025-09-13T00:55:25.275016793Z" level=info msg="StopPodSandbox for \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\""
Sep 13 00:55:25.275098 env[1206]: time="2025-09-13T00:55:25.275074750Z" level=info msg="Container to stop \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.275098 env[1206]: time="2025-09-13T00:55:25.275087934Z" level=info msg="Container to stop \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.275152 env[1206]: time="2025-09-13T00:55:25.275098724Z" level=info msg="Container to stop \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.275152 env[1206]: time="2025-09-13T00:55:25.275108121Z" level=info msg="Container to stop \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.275152 env[1206]: time="2025-09-13T00:55:25.275118029Z" level=info msg="Container to stop \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:55:25.281798 systemd[1]: cri-containerd-2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1.scope: Deactivated successfully.
Sep 13 00:55:25.304165 env[1206]: time="2025-09-13T00:55:25.304089885Z" level=info msg="shim disconnected" id=2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1
Sep 13 00:55:25.304165 env[1206]: time="2025-09-13T00:55:25.304165755Z" level=warning msg="cleaning up after shim disconnected" id=2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1 namespace=k8s.io
Sep 13 00:55:25.304165 env[1206]: time="2025-09-13T00:55:25.304177176Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:25.308677 kubelet[1899]: I0913 00:55:25.308639 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbmqd\" (UniqueName: \"kubernetes.io/projected/f71b6e1b-4d67-4663-8a18-411034e5bb47-kube-api-access-lbmqd\") pod \"f71b6e1b-4d67-4663-8a18-411034e5bb47\" (UID: \"f71b6e1b-4d67-4663-8a18-411034e5bb47\") "
Sep 13 00:55:25.309154 kubelet[1899]: I0913 00:55:25.308690 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f71b6e1b-4d67-4663-8a18-411034e5bb47-cilium-config-path\") pod \"f71b6e1b-4d67-4663-8a18-411034e5bb47\" (UID: \"f71b6e1b-4d67-4663-8a18-411034e5bb47\") "
Sep 13 00:55:25.311250 kubelet[1899]: I0913 00:55:25.311221 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71b6e1b-4d67-4663-8a18-411034e5bb47-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f71b6e1b-4d67-4663-8a18-411034e5bb47" (UID: "f71b6e1b-4d67-4663-8a18-411034e5bb47"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 00:55:25.314940 kubelet[1899]: I0913 00:55:25.314856 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71b6e1b-4d67-4663-8a18-411034e5bb47-kube-api-access-lbmqd" (OuterVolumeSpecName: "kube-api-access-lbmqd") pod "f71b6e1b-4d67-4663-8a18-411034e5bb47" (UID: "f71b6e1b-4d67-4663-8a18-411034e5bb47"). InnerVolumeSpecName "kube-api-access-lbmqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 00:55:25.316405 env[1206]: time="2025-09-13T00:55:25.316363147Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3687 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:25.316725 env[1206]: time="2025-09-13T00:55:25.316687274Z" level=info msg="TearDown network for sandbox \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" successfully"
Sep 13 00:55:25.316769 env[1206]: time="2025-09-13T00:55:25.316718992Z" level=info msg="StopPodSandbox for \"2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1\" returns successfully"
Sep 13 00:55:25.409930 kubelet[1899]: I0913 00:55:25.409884 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-xtables-lock\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.409930 kubelet[1899]: I0913 00:55:25.409927 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-hostproc\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.409956 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-config-path\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.409976 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-run\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.409991 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-kernel\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.410007 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-lib-modules\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.410026 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-cgroup\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410125 kubelet[1899]: I0913 00:55:25.410049 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-bpf-maps\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410304 kubelet[1899]: I0913 00:55:25.410073 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a37a289-c013-4e1f-9200-a460b34b5201-clustermesh-secrets\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410304 kubelet[1899]: I0913 00:55:25.410015 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410304 kubelet[1899]: I0913 00:55:25.410093 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-hubble-tls\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410304 kubelet[1899]: I0913 00:55:25.410024 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410304 kubelet[1899]: I0913 00:55:25.410044 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410424 kubelet[1899]: I0913 00:55:25.410055 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410424 kubelet[1899]: I0913 00:55:25.410071 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410424 kubelet[1899]: I0913 00:55:25.410130 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410424 kubelet[1899]: I0913 00:55:25.410144 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410424 kubelet[1899]: I0913 00:55:25.410109 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-etc-cni-netd\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410154 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410181 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfrvj\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-kube-api-access-vfrvj\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410229 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-net\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410251 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cni-path\") pod \"7a37a289-c013-4e1f-9200-a460b34b5201\" (UID: \"7a37a289-c013-4e1f-9200-a460b34b5201\") "
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410292 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f71b6e1b-4d67-4663-8a18-411034e5bb47-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410306 1899 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410541 kubelet[1899]: I0913 00:55:25.410317 1899 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410330 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410340 1899 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410351 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410360 1899 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410369 1899 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-bpf-maps\") on
node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410379 1899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbmqd\" (UniqueName: \"kubernetes.io/projected/f71b6e1b-4d67-4663-8a18-411034e5bb47-kube-api-access-lbmqd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410389 1899 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.410721 kubelet[1899]: I0913 00:55:25.410412 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:25.411014 kubelet[1899]: I0913 00:55:25.410984 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:25.411995 kubelet[1899]: I0913 00:55:25.411964 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:55:25.413348 kubelet[1899]: I0913 00:55:25.413313 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-kube-api-access-vfrvj" (OuterVolumeSpecName: "kube-api-access-vfrvj") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "kube-api-access-vfrvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:25.413348 kubelet[1899]: I0913 00:55:25.413341 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:25.413825 kubelet[1899]: I0913 00:55:25.413792 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a37a289-c013-4e1f-9200-a460b34b5201-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a37a289-c013-4e1f-9200-a460b34b5201" (UID: "7a37a289-c013-4e1f-9200-a460b34b5201"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511142 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a37a289-c013-4e1f-9200-a460b34b5201-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511170 1899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfrvj\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-kube-api-access-vfrvj\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511179 1899 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a37a289-c013-4e1f-9200-a460b34b5201-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511186 1899 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a37a289-c013-4e1f-9200-a460b34b5201-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511195 1899 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.511286 kubelet[1899]: I0913 00:55:25.511213 1899 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a37a289-c013-4e1f-9200-a460b34b5201-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:25.736520 systemd[1]: Removed slice kubepods-besteffort-podf71b6e1b_4d67_4663_8a18_411034e5bb47.slice. Sep 13 00:55:25.737680 systemd[1]: Removed slice kubepods-burstable-pod7a37a289_c013_4e1f_9200_a460b34b5201.slice. 
Sep 13 00:55:25.737763 systemd[1]: kubepods-burstable-pod7a37a289_c013_4e1f_9200_a460b34b5201.slice: Consumed 5.984s CPU time. Sep 13 00:55:25.768740 kubelet[1899]: E0913 00:55:25.768637 1899 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:55:25.887605 kubelet[1899]: I0913 00:55:25.887551 1899 scope.go:117] "RemoveContainer" containerID="9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2" Sep 13 00:55:25.888941 env[1206]: time="2025-09-13T00:55:25.888897091Z" level=info msg="RemoveContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\"" Sep 13 00:55:25.897398 env[1206]: time="2025-09-13T00:55:25.897327983Z" level=info msg="RemoveContainer for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" returns successfully" Sep 13 00:55:25.899513 kubelet[1899]: I0913 00:55:25.899479 1899 scope.go:117] "RemoveContainer" containerID="9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2" Sep 13 00:55:25.899889 env[1206]: time="2025-09-13T00:55:25.899823295Z" level=error msg="ContainerStatus for \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\": not found" Sep 13 00:55:25.901019 kubelet[1899]: E0913 00:55:25.900876 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\": not found" containerID="9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2" Sep 13 00:55:25.902173 kubelet[1899]: I0913 00:55:25.901577 1899 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2"} err="failed to get container status \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9404a8d1b207f7d0d975f0d82bdb4dcc70a85af927f97cd52e8aa4564944d6c2\": not found" Sep 13 00:55:25.902260 kubelet[1899]: I0913 00:55:25.902177 1899 scope.go:117] "RemoveContainer" containerID="c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a" Sep 13 00:55:25.905253 env[1206]: time="2025-09-13T00:55:25.905148993Z" level=info msg="RemoveContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\"" Sep 13 00:55:25.908576 env[1206]: time="2025-09-13T00:55:25.908473369Z" level=info msg="RemoveContainer for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" returns successfully" Sep 13 00:55:25.908691 kubelet[1899]: I0913 00:55:25.908660 1899 scope.go:117] "RemoveContainer" containerID="1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19" Sep 13 00:55:25.910499 env[1206]: time="2025-09-13T00:55:25.910463782Z" level=info msg="RemoveContainer for \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\"" Sep 13 00:55:25.914294 env[1206]: time="2025-09-13T00:55:25.914254696Z" level=info msg="RemoveContainer for \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\" returns successfully" Sep 13 00:55:25.914673 kubelet[1899]: I0913 00:55:25.914644 1899 scope.go:117] "RemoveContainer" containerID="d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93" Sep 13 00:55:25.916476 env[1206]: time="2025-09-13T00:55:25.916419570Z" level=info msg="RemoveContainer for \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\"" Sep 13 00:55:25.919434 env[1206]: time="2025-09-13T00:55:25.919397810Z" level=info msg="RemoveContainer for 
\"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\" returns successfully" Sep 13 00:55:25.919590 kubelet[1899]: I0913 00:55:25.919554 1899 scope.go:117] "RemoveContainer" containerID="fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d" Sep 13 00:55:25.920522 env[1206]: time="2025-09-13T00:55:25.920488025Z" level=info msg="RemoveContainer for \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\"" Sep 13 00:55:25.923049 env[1206]: time="2025-09-13T00:55:25.923023921Z" level=info msg="RemoveContainer for \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\" returns successfully" Sep 13 00:55:25.923159 kubelet[1899]: I0913 00:55:25.923134 1899 scope.go:117] "RemoveContainer" containerID="cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e" Sep 13 00:55:25.924162 env[1206]: time="2025-09-13T00:55:25.924132361Z" level=info msg="RemoveContainer for \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\"" Sep 13 00:55:25.926899 env[1206]: time="2025-09-13T00:55:25.926862193Z" level=info msg="RemoveContainer for \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\" returns successfully" Sep 13 00:55:25.927048 kubelet[1899]: I0913 00:55:25.927028 1899 scope.go:117] "RemoveContainer" containerID="c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a" Sep 13 00:55:25.927248 env[1206]: time="2025-09-13T00:55:25.927180088Z" level=error msg="ContainerStatus for \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\": not found" Sep 13 00:55:25.927384 kubelet[1899]: E0913 00:55:25.927349 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\": not found" containerID="c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a" Sep 13 00:55:25.927437 kubelet[1899]: I0913 00:55:25.927390 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a"} err="failed to get container status \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9a9773d0d1b5204e1a6ac3e521ca4c1d887cc005bdf61490e62d33a385dfa3a\": not found" Sep 13 00:55:25.927437 kubelet[1899]: I0913 00:55:25.927411 1899 scope.go:117] "RemoveContainer" containerID="1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19" Sep 13 00:55:25.927581 env[1206]: time="2025-09-13T00:55:25.927542064Z" level=error msg="ContainerStatus for \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\": not found" Sep 13 00:55:25.927671 kubelet[1899]: E0913 00:55:25.927648 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\": not found" containerID="1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19" Sep 13 00:55:25.927738 kubelet[1899]: I0913 00:55:25.927670 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19"} err="failed to get container status \"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"1ddb575979df573a82891dafdb0f53b0f09c2e97f9991493e15570e90cbe4f19\": not found" Sep 13 00:55:25.927738 kubelet[1899]: I0913 00:55:25.927683 1899 scope.go:117] "RemoveContainer" containerID="d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93" Sep 13 00:55:25.927895 env[1206]: time="2025-09-13T00:55:25.927844391Z" level=error msg="ContainerStatus for \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\": not found" Sep 13 00:55:25.928009 kubelet[1899]: E0913 00:55:25.927984 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\": not found" containerID="d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93" Sep 13 00:55:25.928061 kubelet[1899]: I0913 00:55:25.928018 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93"} err="failed to get container status \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\": rpc error: code = NotFound desc = an error occurred when try to find container \"d39a283011a3ab4ad398e124e15731c02473be57ae4021be215d763327d6af93\": not found" Sep 13 00:55:25.928061 kubelet[1899]: I0913 00:55:25.928033 1899 scope.go:117] "RemoveContainer" containerID="fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d" Sep 13 00:55:25.928265 env[1206]: time="2025-09-13T00:55:25.928221475Z" level=error msg="ContainerStatus for \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\": not 
found" Sep 13 00:55:25.928365 kubelet[1899]: E0913 00:55:25.928345 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\": not found" containerID="fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d" Sep 13 00:55:25.928431 kubelet[1899]: I0913 00:55:25.928369 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d"} err="failed to get container status \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa5f62318b7749bb399374bb756ee2b9eff88f3f4d4cd44269c78705a866e17d\": not found" Sep 13 00:55:25.928431 kubelet[1899]: I0913 00:55:25.928390 1899 scope.go:117] "RemoveContainer" containerID="cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e" Sep 13 00:55:25.928572 env[1206]: time="2025-09-13T00:55:25.928529391Z" level=error msg="ContainerStatus for \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\": not found" Sep 13 00:55:25.928670 kubelet[1899]: E0913 00:55:25.928647 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\": not found" containerID="cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e" Sep 13 00:55:25.928739 kubelet[1899]: I0913 00:55:25.928670 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e"} 
err="failed to get container status \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd30604be2534f07ad91023c97dd12281b840a3d5914e6a1f59fe684c867440e\": not found" Sep 13 00:55:26.175172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d330975e440542028c1f3a967326c71c646c21a6b17b765563c88508ce7b6110-rootfs.mount: Deactivated successfully. Sep 13 00:55:26.175328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1-rootfs.mount: Deactivated successfully. Sep 13 00:55:26.175417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d00e41c04b125b4e60575319ff8c8b612009091963b16fb38ad247f2f22f6f1-shm.mount: Deactivated successfully. Sep 13 00:55:26.175497 systemd[1]: var-lib-kubelet-pods-f71b6e1b\x2d4d67\x2d4663\x2d8a18\x2d411034e5bb47-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlbmqd.mount: Deactivated successfully. Sep 13 00:55:26.175576 systemd[1]: var-lib-kubelet-pods-7a37a289\x2dc013\x2d4e1f\x2d9200\x2da460b34b5201-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfrvj.mount: Deactivated successfully. Sep 13 00:55:26.175659 systemd[1]: var-lib-kubelet-pods-7a37a289\x2dc013\x2d4e1f\x2d9200\x2da460b34b5201-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:55:26.175752 systemd[1]: var-lib-kubelet-pods-7a37a289\x2dc013\x2d4e1f\x2d9200\x2da460b34b5201-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:55:26.731440 kubelet[1899]: E0913 00:55:26.731383 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:27.049671 sshd[3539]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:27.053818 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:55100.service: Deactivated successfully. Sep 13 00:55:27.054371 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:55:27.054968 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:55:27.056113 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:55102.service. Sep 13 00:55:27.056720 systemd-logind[1191]: Removed session 23. Sep 13 00:55:27.094694 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 55102 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:27.095967 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:27.100158 systemd-logind[1191]: New session 24 of user core. Sep 13 00:55:27.100985 systemd[1]: Started session-24.scope. Sep 13 00:55:27.122027 kubelet[1899]: I0913 00:55:27.121952 1899 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:55:27Z","lastTransitionTime":"2025-09-13T00:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:55:27.714477 sshd[3708]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:27.717944 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:55106.service. Sep 13 00:55:27.718509 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:55102.service: Deactivated successfully. Sep 13 00:55:27.719069 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 13 00:55:27.719984 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:55:27.722221 systemd-logind[1191]: Removed session 24. Sep 13 00:55:27.734585 kubelet[1899]: I0913 00:55:27.734531 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" path="/var/lib/kubelet/pods/7a37a289-c013-4e1f-9200-a460b34b5201/volumes" Sep 13 00:55:27.735295 kubelet[1899]: I0913 00:55:27.735267 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f71b6e1b-4d67-4663-8a18-411034e5bb47" path="/var/lib/kubelet/pods/f71b6e1b-4d67-4663-8a18-411034e5bb47/volumes" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750288 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="mount-bpf-fs" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750321 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="clean-cilium-state" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750327 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="cilium-agent" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750333 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="mount-cgroup" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750338 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="apply-sysctl-overwrites" Sep 13 00:55:27.750755 kubelet[1899]: E0913 00:55:27.750343 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f71b6e1b-4d67-4663-8a18-411034e5bb47" containerName="cilium-operator" Sep 13 00:55:27.750755 kubelet[1899]: I0913 00:55:27.750375 1899 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a37a289-c013-4e1f-9200-a460b34b5201" containerName="cilium-agent" Sep 13 00:55:27.750755 kubelet[1899]: I0913 00:55:27.750382 1899 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71b6e1b-4d67-4663-8a18-411034e5bb47" containerName="cilium-operator" Sep 13 00:55:27.755782 systemd[1]: Created slice kubepods-burstable-podba25f50a_90e6_4ccf_b1cc_7afb4cc047c3.slice. Sep 13 00:55:27.757013 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 55106 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:27.759467 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:27.764642 systemd[1]: Started session-25.scope. Sep 13 00:55:27.766084 systemd-logind[1191]: New session 25 of user core. Sep 13 00:55:27.827443 kubelet[1899]: I0913 00:55:27.827376 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-etc-cni-netd\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827443 kubelet[1899]: I0913 00:55:27.827429 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-clustermesh-secrets\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827443 kubelet[1899]: I0913 00:55:27.827443 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-ipsec-secrets\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827459 1899 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-bpf-maps\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827473 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-xtables-lock\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827546 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hostproc\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827624 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-kernel\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827645 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hubble-tls\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827667 kubelet[1899]: I0913 00:55:27.827666 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-lib-modules\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827682 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-net\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827695 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cdn\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-kube-api-access-v2cdn\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827713 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-run\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827726 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cni-path\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827743 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-cgroup\") pod \"cilium-kjq8d\" (UID: 
\"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.827800 kubelet[1899]: I0913 00:55:27.827770 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-config-path\") pod \"cilium-kjq8d\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " pod="kube-system/cilium-kjq8d" Sep 13 00:55:27.882016 sshd[3719]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:27.885734 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:55108.service. Sep 13 00:55:27.886247 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:55106.service: Deactivated successfully. Sep 13 00:55:27.886878 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:55:27.887971 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:55:27.889033 systemd-logind[1191]: Removed session 25. Sep 13 00:55:27.893493 kubelet[1899]: E0913 00:55:27.893457 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-v2cdn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-kjq8d" podUID="ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" Sep 13 00:55:27.922752 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 55108 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:55:27.923919 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:55:27.926898 systemd-logind[1191]: New session 26 of user core. Sep 13 00:55:27.927686 systemd[1]: Started session-26.scope. 
Sep 13 00:55:28.028684 kubelet[1899]: I0913 00:55:28.028545 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hostproc\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.028887 kubelet[1899]: I0913 00:55:28.028868 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cni-path\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.029044 kubelet[1899]: I0913 00:55:28.029030 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-run\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.029227 kubelet[1899]: I0913 00:55:28.028669 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hostproc" (OuterVolumeSpecName: "hostproc") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.029334 kubelet[1899]: I0913 00:55:28.028990 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cni-path" (OuterVolumeSpecName: "cni-path") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.029433 kubelet[1899]: I0913 00:55:28.029163 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.029559 kubelet[1899]: I0913 00:55:28.029544 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-etc-cni-netd\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.029713 kubelet[1899]: I0913 00:55:28.029699 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-net\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.029857 kubelet[1899]: I0913 00:55:28.029844 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-bpf-maps\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.030015 kubelet[1899]: I0913 00:55:28.030002 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-clustermesh-secrets\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.030429 kubelet[1899]: I0913 00:55:28.030414 1899 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-xtables-lock\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.030634 kubelet[1899]: I0913 00:55:28.030605 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-ipsec-secrets\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.030914 kubelet[1899]: I0913 00:55:28.030900 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-cgroup\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031086 kubelet[1899]: I0913 00:55:28.031070 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-kernel\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031253 kubelet[1899]: I0913 00:55:28.031239 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hubble-tls\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031512 kubelet[1899]: I0913 00:55:28.031498 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2cdn\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-kube-api-access-v2cdn\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: 
\"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031639 kubelet[1899]: I0913 00:55:28.031582 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-lib-modules\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031740 kubelet[1899]: I0913 00:55:28.031723 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-config-path\") pod \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\" (UID: \"ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3\") " Sep 13 00:55:28.031845 kubelet[1899]: I0913 00:55:28.031828 1899 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.031976 kubelet[1899]: I0913 00:55:28.031960 1899 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.032064 kubelet[1899]: I0913 00:55:28.032045 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.032239 kubelet[1899]: I0913 00:55:28.029666 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.032332 kubelet[1899]: I0913 00:55:28.029805 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.032463 kubelet[1899]: I0913 00:55:28.029956 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.032618 kubelet[1899]: I0913 00:55:28.030531 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.032762 kubelet[1899]: I0913 00:55:28.031000 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.032973 kubelet[1899]: I0913 00:55:28.031177 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.033109 kubelet[1899]: I0913 00:55:28.032645 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:55:28.033234 kubelet[1899]: I0913 00:55:28.032895 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:28.034140 systemd[1]: var-lib-kubelet-pods-ba25f50a\x2d90e6\x2d4ccf\x2db1cc\x2d7afb4cc047c3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:55:28.037914 systemd[1]: var-lib-kubelet-pods-ba25f50a\x2d90e6\x2d4ccf\x2db1cc\x2d7afb4cc047c3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:55:28.038605 kubelet[1899]: I0913 00:55:28.038565 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-kube-api-access-v2cdn" (OuterVolumeSpecName: "kube-api-access-v2cdn") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "kube-api-access-v2cdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:28.039157 kubelet[1899]: I0913 00:55:28.039130 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:28.039225 kubelet[1899]: I0913 00:55:28.039217 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:28.040858 kubelet[1899]: I0913 00:55:28.040824 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" (UID: "ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132638 1899 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132659 1899 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132668 1899 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132676 1899 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132682 1899 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132689 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 00:55:28.132696 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.132686 kubelet[1899]: I0913 
00:55:28.132703 1899 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.133038 kubelet[1899]: I0913 00:55:28.132710 1899 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.133038 kubelet[1899]: I0913 00:55:28.132717 1899 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2cdn\" (UniqueName: \"kubernetes.io/projected/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-kube-api-access-v2cdn\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.133038 kubelet[1899]: I0913 00:55:28.132724 1899 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.133038 kubelet[1899]: I0913 00:55:28.132731 1899 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:55:28.906125 systemd[1]: Removed slice kubepods-burstable-podba25f50a_90e6_4ccf_b1cc_7afb4cc047c3.slice. Sep 13 00:55:28.932576 systemd[1]: var-lib-kubelet-pods-ba25f50a\x2d90e6\x2d4ccf\x2db1cc\x2d7afb4cc047c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv2cdn.mount: Deactivated successfully. Sep 13 00:55:28.932716 systemd[1]: var-lib-kubelet-pods-ba25f50a\x2d90e6\x2d4ccf\x2db1cc\x2d7afb4cc047c3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:55:28.940317 systemd[1]: Created slice kubepods-burstable-podd33c936c_f2f4_45e3_8bae_45aac0b328a1.slice. 
Sep 13 00:55:29.036767 kubelet[1899]: I0913 00:55:29.036727 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-cni-path\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.036767 kubelet[1899]: I0913 00:55:29.036764 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp2gc\" (UniqueName: \"kubernetes.io/projected/d33c936c-f2f4-45e3-8bae-45aac0b328a1-kube-api-access-rp2gc\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.036767 kubelet[1899]: I0913 00:55:29.036781 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-bpf-maps\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036792 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-lib-modules\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036805 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-host-proc-sys-kernel\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036819 1899 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-xtables-lock\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036831 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-etc-cni-netd\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036843 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d33c936c-f2f4-45e3-8bae-45aac0b328a1-clustermesh-secrets\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037184 kubelet[1899]: I0913 00:55:29.036858 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d33c936c-f2f4-45e3-8bae-45aac0b328a1-cilium-config-path\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.036872 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d33c936c-f2f4-45e3-8bae-45aac0b328a1-hubble-tls\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.036930 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-cilium-run\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.036970 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-cilium-cgroup\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.036987 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-host-proc-sys-net\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.037007 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d33c936c-f2f4-45e3-8bae-45aac0b328a1-cilium-ipsec-secrets\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.037347 kubelet[1899]: I0913 00:55:29.037023 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d33c936c-f2f4-45e3-8bae-45aac0b328a1-hostproc\") pod \"cilium-8w2jj\" (UID: \"d33c936c-f2f4-45e3-8bae-45aac0b328a1\") " pod="kube-system/cilium-8w2jj" Sep 13 00:55:29.243553 kubelet[1899]: E0913 00:55:29.243444 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:29.243938 env[1206]: time="2025-09-13T00:55:29.243905134Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w2jj,Uid:d33c936c-f2f4-45e3-8bae-45aac0b328a1,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:29.615133 env[1206]: time="2025-09-13T00:55:29.615068537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:29.615133 env[1206]: time="2025-09-13T00:55:29.615106747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:29.615133 env[1206]: time="2025-09-13T00:55:29.615116806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:29.615359 env[1206]: time="2025-09-13T00:55:29.615302691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7 pid=3763 runtime=io.containerd.runc.v2 Sep 13 00:55:29.626032 systemd[1]: Started cri-containerd-4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7.scope. 
Sep 13 00:55:29.647949 env[1206]: time="2025-09-13T00:55:29.647905666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8w2jj,Uid:d33c936c-f2f4-45e3-8bae-45aac0b328a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\"" Sep 13 00:55:29.649132 kubelet[1899]: E0913 00:55:29.648889 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:29.651253 env[1206]: time="2025-09-13T00:55:29.651190283Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:55:29.664308 env[1206]: time="2025-09-13T00:55:29.664232811Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110\"" Sep 13 00:55:29.665016 env[1206]: time="2025-09-13T00:55:29.664970638Z" level=info msg="StartContainer for \"4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110\"" Sep 13 00:55:29.679984 systemd[1]: Started cri-containerd-4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110.scope. Sep 13 00:55:29.707042 env[1206]: time="2025-09-13T00:55:29.706986431Z" level=info msg="StartContainer for \"4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110\" returns successfully" Sep 13 00:55:29.712521 systemd[1]: cri-containerd-4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110.scope: Deactivated successfully. 
Sep 13 00:55:29.733481 kubelet[1899]: I0913 00:55:29.733437 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3" path="/var/lib/kubelet/pods/ba25f50a-90e6-4ccf-b1cc-7afb4cc047c3/volumes" Sep 13 00:55:29.740339 env[1206]: time="2025-09-13T00:55:29.740287500Z" level=info msg="shim disconnected" id=4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110 Sep 13 00:55:29.740339 env[1206]: time="2025-09-13T00:55:29.740330520Z" level=warning msg="cleaning up after shim disconnected" id=4b279094f5b37b7633058792bbef25b2415ddf94c0cc886658854059661e4110 namespace=k8s.io Sep 13 00:55:29.740339 env[1206]: time="2025-09-13T00:55:29.740339988Z" level=info msg="cleaning up dead shim" Sep 13 00:55:29.747071 env[1206]: time="2025-09-13T00:55:29.747032964Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\n" Sep 13 00:55:29.906176 kubelet[1899]: E0913 00:55:29.906029 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:55:29.914244 env[1206]: time="2025-09-13T00:55:29.909526562Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:55:29.922519 env[1206]: time="2025-09-13T00:55:29.922134342Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c\"" Sep 13 00:55:29.929230 env[1206]: time="2025-09-13T00:55:29.924471665Z" level=info msg="StartContainer for 
\"1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c\""
Sep 13 00:55:29.947714 systemd[1]: Started cri-containerd-1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c.scope.
Sep 13 00:55:29.976524 systemd[1]: cri-containerd-1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c.scope: Deactivated successfully.
Sep 13 00:55:30.117674 env[1206]: time="2025-09-13T00:55:30.117585545Z" level=info msg="StartContainer for \"1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c\" returns successfully"
Sep 13 00:55:30.137189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c-rootfs.mount: Deactivated successfully.
Sep 13 00:55:30.345665 env[1206]: time="2025-09-13T00:55:30.345527646Z" level=info msg="shim disconnected" id=1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c
Sep 13 00:55:30.345665 env[1206]: time="2025-09-13T00:55:30.345575335Z" level=warning msg="cleaning up after shim disconnected" id=1e7c78952715fad985cf76d93bbaa77502325eac592fc9fa995d478ec198765c namespace=k8s.io
Sep 13 00:55:30.345665 env[1206]: time="2025-09-13T00:55:30.345584282Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:30.352089 env[1206]: time="2025-09-13T00:55:30.352053494Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:30.769874 kubelet[1899]: E0913 00:55:30.769828 1899 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:55:30.909320 kubelet[1899]: E0913 00:55:30.909256 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:30.911330 env[1206]: time="2025-09-13T00:55:30.910997469Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:55:30.925450 env[1206]: time="2025-09-13T00:55:30.925386496Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e\""
Sep 13 00:55:30.925889 env[1206]: time="2025-09-13T00:55:30.925868812Z" level=info msg="StartContainer for \"99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e\""
Sep 13 00:55:30.941777 systemd[1]: run-containerd-runc-k8s.io-99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e-runc.vMNqLp.mount: Deactivated successfully.
Sep 13 00:55:30.945225 systemd[1]: Started cri-containerd-99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e.scope.
Sep 13 00:55:30.973284 systemd[1]: cri-containerd-99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e.scope: Deactivated successfully.
Sep 13 00:55:30.976381 env[1206]: time="2025-09-13T00:55:30.976335080Z" level=info msg="StartContainer for \"99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e\" returns successfully"
Sep 13 00:55:30.996341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e-rootfs.mount: Deactivated successfully.
Sep 13 00:55:31.000506 env[1206]: time="2025-09-13T00:55:31.000465496Z" level=info msg="shim disconnected" id=99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e
Sep 13 00:55:31.000606 env[1206]: time="2025-09-13T00:55:31.000510320Z" level=warning msg="cleaning up after shim disconnected" id=99dfd931c4ff71ba0194dabeb485f4f1366d62b7d4c03df051ed468f4b88529e namespace=k8s.io
Sep 13 00:55:31.000606 env[1206]: time="2025-09-13T00:55:31.000518935Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:31.006444 env[1206]: time="2025-09-13T00:55:31.006398851Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:31.731896 kubelet[1899]: E0913 00:55:31.731845 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:31.913262 kubelet[1899]: E0913 00:55:31.913224 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:31.915045 env[1206]: time="2025-09-13T00:55:31.914999209Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:55:31.927585 env[1206]: time="2025-09-13T00:55:31.927524274Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6\""
Sep 13 00:55:31.928056 env[1206]: time="2025-09-13T00:55:31.928020908Z" level=info msg="StartContainer for \"b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6\""
Sep 13 00:55:31.948360 systemd[1]: Started cri-containerd-b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6.scope.
Sep 13 00:55:31.971527 systemd[1]: cri-containerd-b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6.scope: Deactivated successfully.
Sep 13 00:55:31.973059 env[1206]: time="2025-09-13T00:55:31.972995907Z" level=info msg="StartContainer for \"b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6\" returns successfully"
Sep 13 00:55:31.993023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6-rootfs.mount: Deactivated successfully.
Sep 13 00:55:32.085788 env[1206]: time="2025-09-13T00:55:32.085735215Z" level=info msg="shim disconnected" id=b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6
Sep 13 00:55:32.085788 env[1206]: time="2025-09-13T00:55:32.085782654Z" level=warning msg="cleaning up after shim disconnected" id=b9ae875d300f920adf316e1c452b3547a62d53d82bbdefbeeed742747d0a32e6 namespace=k8s.io
Sep 13 00:55:32.085788 env[1206]: time="2025-09-13T00:55:32.085790979Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:32.096587 env[1206]: time="2025-09-13T00:55:32.096521568Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4019 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:32.916724 kubelet[1899]: E0913 00:55:32.916690 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:32.918689 env[1206]: time="2025-09-13T00:55:32.918651385Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:55:32.933417 env[1206]: time="2025-09-13T00:55:32.933330571Z" level=info msg="CreateContainer within sandbox \"4d278565eef8e9932cd4f38cd5e5ceef7871829918f0929c0a296d770e0debd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878\""
Sep 13 00:55:32.934943 env[1206]: time="2025-09-13T00:55:32.934896007Z" level=info msg="StartContainer for \"61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878\""
Sep 13 00:55:32.951906 systemd[1]: run-containerd-runc-k8s.io-61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878-runc.EU1Bkz.mount: Deactivated successfully.
Sep 13 00:55:32.953560 systemd[1]: Started cri-containerd-61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878.scope.
Sep 13 00:55:32.985214 env[1206]: time="2025-09-13T00:55:32.985144193Z" level=info msg="StartContainer for \"61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878\" returns successfully"
Sep 13 00:55:33.267229 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:55:33.920176 kubelet[1899]: E0913 00:55:33.920137 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:33.931981 kubelet[1899]: I0913 00:55:33.931917 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8w2jj" podStartSLOduration=5.931899122 podStartE2EDuration="5.931899122s" podCreationTimestamp="2025-09-13 00:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:33.9316464 +0000 UTC m=+88.280151419" watchObservedRunningTime="2025-09-13 00:55:33.931899122 +0000 UTC m=+88.280404141"
Sep 13 00:55:33.932988 systemd[1]: run-containerd-runc-k8s.io-61d87c13e878f97f2eb538371ac96c650b78d1e4d35308f54ff4ca01e708e878-runc.axWbRd.mount: Deactivated successfully.
Sep 13 00:55:34.731574 kubelet[1899]: E0913 00:55:34.731527 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:35.244599 kubelet[1899]: E0913 00:55:35.244540 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:35.797667 systemd-networkd[1022]: lxc_health: Link UP
Sep 13 00:55:35.804108 systemd-networkd[1022]: lxc_health: Gained carrier
Sep 13 00:55:35.804229 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:55:37.244992 kubelet[1899]: E0913 00:55:37.244945 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:37.678440 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Sep 13 00:55:37.929799 kubelet[1899]: E0913 00:55:37.929661 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:38.931434 kubelet[1899]: E0913 00:55:38.931382 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:42.775741 sshd[3732]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:42.778300 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:55108.service: Deactivated successfully.
Sep 13 00:55:42.779003 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:55:42.779587 systemd-logind[1191]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:55:42.780259 systemd-logind[1191]: Removed session 26.
Sep 13 00:55:43.731379 kubelet[1899]: E0913 00:55:43.731342 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"