Aug 13 00:59:57.207716 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:59:57.207750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:59:57.207761 kernel: BIOS-provided physical RAM map:
Aug 13 00:59:57.207781 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:59:57.207789 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:59:57.207796 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:59:57.207806 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 13 00:59:57.207814 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 13 00:59:57.207823 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:59:57.207831 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:59:57.207839 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:59:57.207846 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:59:57.207854 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:59:57.207862 kernel: NX (Execute Disable) protection: active
Aug 13 00:59:57.207873 kernel: SMBIOS 2.8 present.
Aug 13 00:59:57.207882 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 13 00:59:57.207890 kernel: Hypervisor detected: KVM
Aug 13 00:59:57.207907 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:59:57.207919 kernel: kvm-clock: cpu 0, msr 5a19e001, primary cpu clock
Aug 13 00:59:57.207927 kernel: kvm-clock: using sched offset of 3831625572 cycles
Aug 13 00:59:57.207936 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:59:57.207945 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:59:57.207954 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:59:57.207966 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:59:57.207974 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 13 00:59:57.207983 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:59:57.207992 kernel: Using GB pages for direct mapping
Aug 13 00:59:57.208000 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:59:57.208009 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 13 00:59:57.208017 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208026 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208035 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208045 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 13 00:59:57.208054 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208062 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208071 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208080 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:59:57.208088 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 13 00:59:57.208097 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 13 00:59:57.208106 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 13 00:59:57.208119 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 13 00:59:57.208128 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 13 00:59:57.208137 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 13 00:59:57.208147 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 13 00:59:57.208155 kernel: No NUMA configuration found
Aug 13 00:59:57.208165 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 13 00:59:57.208175 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Aug 13 00:59:57.208185 kernel: Zone ranges:
Aug 13 00:59:57.208194 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:59:57.208203 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 13 00:59:57.208212 kernel: Normal empty
Aug 13 00:59:57.208221 kernel: Movable zone start for each node
Aug 13 00:59:57.208230 kernel: Early memory node ranges
Aug 13 00:59:57.208239 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:59:57.208248 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 13 00:59:57.208258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 13 00:59:57.208271 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:59:57.208280 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:59:57.208289 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 13 00:59:57.208299 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:59:57.208308 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:59:57.208317 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:59:57.208326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:59:57.208335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:59:57.208344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:59:57.208359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:59:57.208369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:59:57.208378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:59:57.208387 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:59:57.208396 kernel: TSC deadline timer available
Aug 13 00:59:57.208405 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 00:59:57.208414 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:59:57.208423 kernel: kvm-guest: setup PV sched yield
Aug 13 00:59:57.208432 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:59:57.208443 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:59:57.208452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:59:57.208462 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:59:57.208471 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Aug 13 00:59:57.208480 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Aug 13 00:59:57.208489 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:59:57.208498 kernel: kvm-guest: setup async PF for cpu 0
Aug 13 00:59:57.208507 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Aug 13 00:59:57.208516 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:59:57.208527 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:59:57.208536 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Aug 13 00:59:57.208545 kernel: Policy zone: DMA32
Aug 13 00:59:57.208555 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:59:57.208565 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:59:57.208574 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:59:57.208584 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:59:57.208593 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:59:57.208605 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 134796K reserved, 0K cma-reserved)
Aug 13 00:59:57.208614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:59:57.208655 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:59:57.208676 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:59:57.208685 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:59:57.208695 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:59:57.208705 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:59:57.208714 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:59:57.208723 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:59:57.208737 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:59:57.208747 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:59:57.208756 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:59:57.208765 kernel: random: crng init done
Aug 13 00:59:57.208788 kernel: Console: colour VGA+ 80x25
Aug 13 00:59:57.208797 kernel: printk: console [ttyS0] enabled
Aug 13 00:59:57.208806 kernel: ACPI: Core revision 20210730
Aug 13 00:59:57.208815 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:59:57.208825 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:59:57.208836 kernel: x2apic enabled
Aug 13 00:59:57.208845 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:59:57.208857 kernel: kvm-guest: setup PV IPIs
Aug 13 00:59:57.208866 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:59:57.208875 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:59:57.208887 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:59:57.208916 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:59:57.208936 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:59:57.208946 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:59:57.208965 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:59:57.208975 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:59:57.208986 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:59:57.208996 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:59:57.209005 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:59:57.209015 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:59:57.209025 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 00:59:57.209035 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:59:57.209045 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:59:57.209056 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:59:57.209066 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:59:57.209075 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:59:57.209084 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:59:57.209094 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:59:57.209103 kernel: LSM: Security Framework initializing
Aug 13 00:59:57.209112 kernel: SELinux: Initializing.
Aug 13 00:59:57.209124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:59:57.209133 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:59:57.209142 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:59:57.209152 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:59:57.209162 kernel: ... version: 0
Aug 13 00:59:57.209171 kernel: ... bit width: 48
Aug 13 00:59:57.209180 kernel: ... generic registers: 6
Aug 13 00:59:57.209190 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:59:57.209200 kernel: ... max period: 00007fffffffffff
Aug 13 00:59:57.209211 kernel: ... fixed-purpose events: 0
Aug 13 00:59:57.209221 kernel: ... event mask: 000000000000003f
Aug 13 00:59:57.209230 kernel: signal: max sigframe size: 1776
Aug 13 00:59:57.209240 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:59:57.209249 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:59:57.209259 kernel: x86: Booting SMP configuration:
Aug 13 00:59:57.209268 kernel: .... node #0, CPUs: #1
Aug 13 00:59:57.209278 kernel: kvm-clock: cpu 1, msr 5a19e041, secondary cpu clock
Aug 13 00:59:57.209287 kernel: kvm-guest: setup async PF for cpu 1
Aug 13 00:59:57.209298 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Aug 13 00:59:57.209308 kernel: #2
Aug 13 00:59:57.209318 kernel: kvm-clock: cpu 2, msr 5a19e081, secondary cpu clock
Aug 13 00:59:57.209327 kernel: kvm-guest: setup async PF for cpu 2
Aug 13 00:59:57.209337 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Aug 13 00:59:57.209346 kernel: #3
Aug 13 00:59:57.209360 kernel: kvm-clock: cpu 3, msr 5a19e0c1, secondary cpu clock
Aug 13 00:59:57.209369 kernel: kvm-guest: setup async PF for cpu 3
Aug 13 00:59:57.209379 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Aug 13 00:59:57.209388 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:59:57.209399 kernel: smpboot: Max logical packages: 1
Aug 13 00:59:57.209409 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:59:57.209419 kernel: devtmpfs: initialized
Aug 13 00:59:57.209428 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:59:57.209438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:59:57.209448 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:59:57.209458 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:59:57.209467 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:59:57.209477 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:59:57.209488 kernel: audit: type=2000 audit(1755046796.188:1): state=initialized audit_enabled=0 res=1
Aug 13 00:59:57.209498 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:59:57.209507 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:59:57.209517 kernel: cpuidle: using governor menu
Aug 13 00:59:57.209526 kernel: ACPI: bus type PCI registered
Aug 13 00:59:57.209535 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:59:57.209545 kernel: dca service started, version 1.12.1
Aug 13 00:59:57.209555 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:59:57.209565 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Aug 13 00:59:57.209577 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:59:57.209586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:59:57.209596 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:59:57.209606 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:59:57.209615 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:59:57.209625 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:59:57.209634 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:59:57.209644 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:59:57.209653 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:59:57.209665 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:59:57.209674 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:59:57.209684 kernel: ACPI: Interpreter enabled
Aug 13 00:59:57.209693 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:59:57.209703 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:59:57.209713 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:59:57.209723 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:59:57.209732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:59:57.210002 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:59:57.210112 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:59:57.210207 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:59:57.210219 kernel: PCI host bridge to bus 0000:00
Aug 13 00:59:57.211545 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:59:57.211692 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:59:57.211854 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:59:57.212002 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:59:57.212147 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:59:57.212245 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 13 00:59:57.212371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:59:57.212576 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:59:57.212741 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:59:57.212933 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:59:57.213038 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:59:57.213133 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:59:57.213231 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:59:57.213346 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:59:57.213450 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 13 00:59:57.213557 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:59:57.213662 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:59:57.213826 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:59:57.213947 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 00:59:57.214050 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:59:57.214150 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:59:57.214271 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:59:57.214379 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Aug 13 00:59:57.214489 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:59:57.214588 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 13 00:59:57.214691 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:59:57.214855 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:59:57.214968 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:59:57.215084 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:59:57.215184 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Aug 13 00:59:57.215283 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Aug 13 00:59:57.215406 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:59:57.215506 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 00:59:57.215521 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:59:57.215531 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:59:57.215541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:59:57.215551 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:59:57.215560 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:59:57.215573 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:59:57.215583 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:59:57.215593 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:59:57.215602 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:59:57.215612 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:59:57.215621 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:59:57.215631 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:59:57.215641 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:59:57.215650 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:59:57.215662 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:59:57.215671 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:59:57.215681 kernel: iommu: Default domain type: Translated
Aug 13 00:59:57.215691 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:59:57.215811 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:59:57.215920 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:59:57.216017 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:59:57.216030 kernel: vgaarb: loaded
Aug 13 00:59:57.216044 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:59:57.216054 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:59:57.216064 kernel: PTP clock support registered
Aug 13 00:59:57.216073 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:59:57.216083 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:59:57.216093 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:59:57.216102 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 13 00:59:57.216112 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:59:57.216122 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:59:57.216133 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:59:57.216143 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:59:57.216153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:59:57.216162 kernel: pnp: PnP ACPI init
Aug 13 00:59:57.216305 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:59:57.216321 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:59:57.216331 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:59:57.216341 kernel: NET: Registered PF_INET protocol family
Aug 13 00:59:57.216354 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:59:57.216364 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:59:57.216374 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:59:57.216384 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:59:57.216393 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:59:57.216402 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:59:57.216412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:59:57.216422 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:59:57.216431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:59:57.216444 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:59:57.216539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:59:57.216624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:59:57.216707 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:59:57.216812 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:59:57.216907 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:59:57.216992 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 13 00:59:57.217005 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:59:57.217018 kernel: Initialise system trusted keyrings
Aug 13 00:59:57.217028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:59:57.217038 kernel: Key type asymmetric registered
Aug 13 00:59:57.217048 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:59:57.217057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:59:57.217067 kernel: io scheduler mq-deadline registered
Aug 13 00:59:57.217076 kernel: io scheduler kyber registered
Aug 13 00:59:57.217086 kernel: io scheduler bfq registered
Aug 13 00:59:57.217095 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:59:57.217108 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:59:57.217118 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:59:57.217128 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:59:57.217138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:59:57.217148 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:59:57.217158 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:59:57.217168 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:59:57.217178 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:59:57.217307 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:59:57.217324 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:59:57.217413 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:59:57.217501 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:59:56 UTC (1755046796)
Aug 13 00:59:57.217586 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:59:57.217598 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:59:57.217608 kernel: Segment Routing with IPv6
Aug 13 00:59:57.217618 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:59:57.217628 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:59:57.217641 kernel: Key type dns_resolver registered
Aug 13 00:59:57.217650 kernel: IPI shorthand broadcast: enabled
Aug 13 00:59:57.217666 kernel: sched_clock: Marking stable (451001483, 117327294)->(683109330, -114780553)
Aug 13 00:59:57.217704 kernel: registered taskstats version 1
Aug 13 00:59:57.217725 kernel: Loading compiled-in X.509 certificates
Aug 13 00:59:57.217736 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:59:57.217745 kernel: Key type .fscrypt registered
Aug 13 00:59:57.217755 kernel: Key type fscrypt-provisioning registered
Aug 13 00:59:57.217765 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:59:57.217803 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:59:57.217813 kernel: ima: No architecture policies found
Aug 13 00:59:57.217823 kernel: clk: Disabling unused clocks
Aug 13 00:59:57.217833 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:59:57.217842 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:59:57.217852 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:59:57.217862 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:59:57.217872 kernel: Run /init as init process
Aug 13 00:59:57.217881 kernel: with arguments:
Aug 13 00:59:57.217893 kernel: /init
Aug 13 00:59:57.217911 kernel: with environment:
Aug 13 00:59:57.217920 kernel: HOME=/
Aug 13 00:59:57.217930 kernel: TERM=linux
Aug 13 00:59:57.217939 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:59:57.217956 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:59:57.217969 systemd[1]: Detected virtualization kvm.
Aug 13 00:59:57.217980 systemd[1]: Detected architecture x86-64.
Aug 13 00:59:57.217992 systemd[1]: Running in initrd.
Aug 13 00:59:57.218002 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:59:57.218012 systemd[1]: Hostname set to .
Aug 13 00:59:57.218030 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:59:57.218040 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:59:57.218051 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:59:57.218061 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:59:57.218071 systemd[1]: Reached target paths.target.
Aug 13 00:59:57.218084 systemd[1]: Reached target slices.target.
Aug 13 00:59:57.218094 systemd[1]: Reached target swap.target.
Aug 13 00:59:57.218112 systemd[1]: Reached target timers.target.
Aug 13 00:59:57.218125 systemd[1]: Listening on iscsid.socket.
Aug 13 00:59:57.218136 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:59:57.218148 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:59:57.218159 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:59:57.218169 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:59:57.218180 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:59:57.218191 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:59:57.218201 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:59:57.218212 systemd[1]: Reached target sockets.target.
Aug 13 00:59:57.218222 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:59:57.218233 systemd[1]: Finished network-cleanup.service.
Aug 13 00:59:57.218246 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:59:57.218256 systemd[1]: Starting systemd-journald.service...
Aug 13 00:59:57.218267 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:59:57.218278 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:59:57.218288 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:59:57.218299 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:59:57.218309 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:59:57.218320 kernel: audit: type=1130 audit(1755046797.208:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:59:57.218331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:59:57.218351 systemd-journald[199]: Journal started
Aug 13 00:59:57.218407 systemd-journald[199]: Runtime Journal (/run/log/journal/b060213bf4ba446cb9402f2ab1f981a2) is 6.0M, max 48.5M, 42.5M free.
Aug 13 00:59:57.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:59:57.208960 systemd-modules-load[200]: Inserted module 'overlay'
Aug 13 00:59:57.220279 systemd-resolved[201]: Positive Trust Anchors:
Aug 13 00:59:57.220287 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:59:57.220315 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:59:57.222970 systemd-resolved[201]: Defaulting to hostname 'linux'.
Aug 13 00:59:57.254808 systemd[1]: Started systemd-journald.service.
Aug 13 00:59:57.257815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:59:57.257854 kernel: audit: type=1130 audit(1755046797.257:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.259100 systemd[1]: Started systemd-resolved.service. Aug 13 00:59:57.262513 kernel: audit: type=1130 audit(1755046797.260:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.261782 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:59:57.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.267354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:59:57.275370 kernel: audit: type=1130 audit(1755046797.266:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.275416 kernel: Bridge firewalling registered Aug 13 00:59:57.275428 kernel: audit: type=1130 audit(1755046797.270:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:59:57.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.270664 systemd-modules-load[200]: Inserted module 'br_netfilter' Aug 13 00:59:57.271470 systemd[1]: Reached target nss-lookup.target. Aug 13 00:59:57.276996 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:59:57.293792 kernel: SCSI subsystem initialized Aug 13 00:59:57.293836 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:59:57.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.294872 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:59:57.298932 kernel: audit: type=1130 audit(1755046797.293:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.305265 dracut-cmdline[218]: dracut-dracut-053 Aug 13 00:59:57.308745 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:59:57.308762 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:59:57.308780 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:59:57.309690 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:59:57.314437 systemd-modules-load[200]: Inserted module 'dm_multipath' Aug 13 00:59:57.315110 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:59:57.319456 kernel: audit: type=1130 audit(1755046797.315:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.316489 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:59:57.326888 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:59:57.330934 kernel: audit: type=1130 audit(1755046797.326:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.380814 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 00:59:57.398807 kernel: iscsi: registered transport (tcp) Aug 13 00:59:57.422819 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:59:57.422901 kernel: QLogic iSCSI HBA Driver Aug 13 00:59:57.447657 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:59:57.452882 kernel: audit: type=1130 audit(1755046797.447:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.449578 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:59:57.533818 kernel: raid6: avx2x4 gen() 27282 MB/s Aug 13 00:59:57.550801 kernel: raid6: avx2x4 xor() 7136 MB/s Aug 13 00:59:57.567805 kernel: raid6: avx2x2 gen() 31211 MB/s Aug 13 00:59:57.584826 kernel: raid6: avx2x2 xor() 18658 MB/s Aug 13 00:59:57.605823 kernel: raid6: avx2x1 gen() 25634 MB/s Aug 13 00:59:57.644800 kernel: raid6: avx2x1 xor() 15358 MB/s Aug 13 00:59:57.661808 kernel: raid6: sse2x4 gen() 13920 MB/s Aug 13 00:59:57.678791 kernel: raid6: sse2x4 xor() 7127 MB/s Aug 13 00:59:57.718817 kernel: raid6: sse2x2 gen() 15870 MB/s Aug 13 00:59:57.735807 kernel: raid6: sse2x2 xor() 9420 MB/s Aug 13 00:59:57.752805 kernel: raid6: sse2x1 gen() 11956 MB/s Aug 13 00:59:57.770150 kernel: raid6: sse2x1 xor() 7404 MB/s Aug 13 00:59:57.770178 kernel: raid6: using algorithm avx2x2 gen() 31211 MB/s Aug 13 00:59:57.770191 kernel: raid6: .... xor() 18658 MB/s, rmw enabled Aug 13 00:59:57.770853 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:59:57.784800 kernel: xor: automatically using best checksumming function avx Aug 13 00:59:57.878808 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:59:57.887915 systemd[1]: Finished dracut-pre-udev.service. 
Aug 13 00:59:57.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.888000 audit: BPF prog-id=7 op=LOAD Aug 13 00:59:57.888000 audit: BPF prog-id=8 op=LOAD Aug 13 00:59:57.889900 systemd[1]: Starting systemd-udevd.service... Aug 13 00:59:57.901888 systemd-udevd[401]: Using default interface naming scheme 'v252'. Aug 13 00:59:57.906125 systemd[1]: Started systemd-udevd.service. Aug 13 00:59:57.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.907954 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:59:57.923593 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 13 00:59:57.946991 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:59:57.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:57.948550 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:59:57.985663 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:59:57.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:58.050804 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:59:58.050869 kernel: libata version 3.00 loaded. Aug 13 00:59:58.061794 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 00:59:58.072269 kernel: AVX2 version of gcm_enc/dec engaged. 
Aug 13 00:59:58.072287 kernel: AES CTR mode by8 optimization enabled Aug 13 00:59:58.072298 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:59:58.073184 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:59:58.073202 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 00:59:58.073295 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:59:58.073373 kernel: scsi host0: ahci Aug 13 00:59:58.138794 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:59:58.138825 kernel: GPT:9289727 != 19775487 Aug 13 00:59:58.138836 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:59:58.138845 kernel: GPT:9289727 != 19775487 Aug 13 00:59:58.138867 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:59:58.138886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:59:58.138898 kernel: scsi host1: ahci Aug 13 00:59:58.139012 kernel: scsi host2: ahci Aug 13 00:59:58.139111 kernel: scsi host3: ahci Aug 13 00:59:58.139214 kernel: scsi host4: ahci Aug 13 00:59:58.139306 kernel: scsi host5: ahci Aug 13 00:59:58.139433 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Aug 13 00:59:58.139446 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Aug 13 00:59:58.139459 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Aug 13 00:59:58.139471 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Aug 13 00:59:58.139483 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Aug 13 00:59:58.139496 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Aug 13 00:59:58.151829 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457) Aug 13 00:59:58.153811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Aug 13 00:59:58.190504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:59:58.198059 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:59:58.207448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:59:58.214126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:59:58.218360 systemd[1]: Starting disk-uuid.service... Aug 13 00:59:58.393478 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:59:58.393558 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 13 00:59:58.394483 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 13 00:59:58.394533 kernel: ata3.00: applying bridge limits Aug 13 00:59:58.395801 kernel: ata3.00: configured for UDMA/100 Aug 13 00:59:58.468802 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:59:58.468885 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:59:58.469803 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:59:58.470806 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:59:58.471806 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 13 00:59:58.532823 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 13 00:59:58.550520 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:59:58.550550 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 00:59:58.630654 disk-uuid[535]: Primary Header is updated. Aug 13 00:59:58.630654 disk-uuid[535]: Secondary Entries is updated. Aug 13 00:59:58.630654 disk-uuid[535]: Secondary Header is updated. Aug 13 00:59:58.634791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:59:58.638805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:59:59.693805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:59:59.694173 disk-uuid[549]: The operation has completed successfully. 
Aug 13 00:59:59.715581 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:59:59.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:59.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:59.715678 systemd[1]: Finished disk-uuid.service. Aug 13 00:59:59.774155 systemd[1]: Starting verity-setup.service... Aug 13 00:59:59.804801 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 00:59:59.824728 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:59:59.826961 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:59:59.829022 systemd[1]: Finished verity-setup.service. Aug 13 00:59:59.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:59:59.940798 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:59:59.940763 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:59:59.941191 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:59:59.942219 systemd[1]: Starting ignition-setup.service... Aug 13 00:59:59.944799 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:59:59.964605 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:59:59.964642 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:59:59.964652 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:59:59.973888 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 01:00:00.023479 systemd[1]: Finished parse-ip-for-networkd.service. 
Aug 13 01:00:00.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.043000 audit: BPF prog-id=9 op=LOAD Aug 13 01:00:00.044882 systemd[1]: Starting systemd-networkd.service... Aug 13 01:00:00.066471 systemd-networkd[718]: lo: Link UP Aug 13 01:00:00.066481 systemd-networkd[718]: lo: Gained carrier Aug 13 01:00:00.066993 systemd-networkd[718]: Enumeration completed Aug 13 01:00:00.067084 systemd[1]: Started systemd-networkd.service. Aug 13 01:00:00.067234 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:00:00.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.092847 systemd-networkd[718]: eth0: Link UP Aug 13 01:00:00.092851 systemd-networkd[718]: eth0: Gained carrier Aug 13 01:00:00.132208 systemd[1]: Reached target network.target. Aug 13 01:00:00.133419 systemd[1]: Starting iscsiuio.service... Aug 13 01:00:00.188429 systemd[1]: Started iscsiuio.service. Aug 13 01:00:00.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.189543 systemd[1]: Starting iscsid.service... Aug 13 01:00:00.192856 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 01:00:00.206858 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:00:00.206858 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 01:00:00.206858 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 01:00:00.206858 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 01:00:00.206858 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:00:00.206858 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 01:00:00.235637 systemd[1]: Started iscsid.service. Aug 13 01:00:00.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.237242 systemd[1]: Starting dracut-initqueue.service... Aug 13 01:00:00.237872 systemd[1]: Finished ignition-setup.service. Aug 13 01:00:00.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.240522 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 01:00:00.249547 systemd[1]: Finished dracut-initqueue.service. Aug 13 01:00:00.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.250113 systemd[1]: Reached target remote-fs-pre.target. Aug 13 01:00:00.251480 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:00:00.251799 systemd[1]: Reached target remote-fs.target. Aug 13 01:00:00.255675 systemd[1]: Starting dracut-pre-mount.service... 
Aug 13 01:00:00.262969 systemd[1]: Finished dracut-pre-mount.service. Aug 13 01:00:00.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.326082 ignition[726]: Ignition 2.14.0 Aug 13 01:00:00.326094 ignition[726]: Stage: fetch-offline Aug 13 01:00:00.326174 ignition[726]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:00.326185 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:00.326325 ignition[726]: parsed url from cmdline: "" Aug 13 01:00:00.326329 ignition[726]: no config URL provided Aug 13 01:00:00.326334 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:00:00.326341 ignition[726]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:00:00.327015 ignition[726]: op(1): [started] loading QEMU firmware config module Aug 13 01:00:00.327022 ignition[726]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 01:00:00.334147 ignition[726]: op(1): [finished] loading QEMU firmware config module Aug 13 01:00:00.372198 ignition[726]: parsing config with SHA512: 3b016d68a09f2da82d15fedfecf23f2efe0f5b8a138fecba6fcf42047cf8148baeb3a7f485bb218cbd91fa046f969e6d5f874f2d35239b7f3389680b06bdf9b7 Aug 13 01:00:00.380884 unknown[726]: fetched base config from "system" Aug 13 01:00:00.380899 unknown[726]: fetched user config from "qemu" Aug 13 01:00:00.381507 ignition[726]: fetch-offline: fetch-offline passed Aug 13 01:00:00.381579 ignition[726]: Ignition finished successfully Aug 13 01:00:00.385149 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 01:00:00.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:00.385717 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 01:00:00.420744 systemd[1]: Starting ignition-kargs.service... Aug 13 01:00:00.460433 ignition[746]: Ignition 2.14.0 Aug 13 01:00:00.460442 ignition[746]: Stage: kargs Aug 13 01:00:00.460567 ignition[746]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:00.460576 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:00.461941 ignition[746]: kargs: kargs passed Aug 13 01:00:00.461977 ignition[746]: Ignition finished successfully Aug 13 01:00:00.465964 systemd[1]: Finished ignition-kargs.service. Aug 13 01:00:00.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.468427 systemd[1]: Starting ignition-disks.service... Aug 13 01:00:00.479449 ignition[752]: Ignition 2.14.0 Aug 13 01:00:00.479459 ignition[752]: Stage: disks Aug 13 01:00:00.479554 ignition[752]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:00.479563 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:00.480549 ignition[752]: disks: disks passed Aug 13 01:00:00.480587 ignition[752]: Ignition finished successfully Aug 13 01:00:00.485196 systemd[1]: Finished ignition-disks.service. Aug 13 01:00:00.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.485517 systemd[1]: Reached target initrd-root-device.target. Aug 13 01:00:00.488342 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:00:00.488730 systemd[1]: Reached target local-fs.target. Aug 13 01:00:00.490232 systemd[1]: Reached target sysinit.target. 
Aug 13 01:00:00.491918 systemd[1]: Reached target basic.target. Aug 13 01:00:00.494138 systemd[1]: Starting systemd-fsck-root.service... Aug 13 01:00:00.531024 systemd-fsck[760]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 01:00:00.767513 systemd[1]: Finished systemd-fsck-root.service. Aug 13 01:00:00.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.769007 systemd[1]: Mounting sysroot.mount... Aug 13 01:00:00.784804 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 01:00:00.785256 systemd[1]: Mounted sysroot.mount. Aug 13 01:00:00.786005 systemd[1]: Reached target initrd-root-fs.target. Aug 13 01:00:00.792974 systemd[1]: Mounting sysroot-usr.mount... Aug 13 01:00:00.794043 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 01:00:00.794072 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:00:00.794092 systemd[1]: Reached target ignition-diskful.target. Aug 13 01:00:00.796546 systemd[1]: Mounted sysroot-usr.mount. Aug 13 01:00:00.798273 systemd[1]: Starting initrd-setup-root.service... Aug 13 01:00:00.803275 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:00:00.806638 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:00:00.810446 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:00:00.814522 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:00:00.836391 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 01:00:00.841340 systemd[1]: Finished initrd-setup-root.service. 
Aug 13 01:00:00.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.843701 systemd[1]: Starting ignition-mount.service... Aug 13 01:00:00.849538 bash[811]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 01:00:00.852820 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804) Aug 13 01:00:00.853408 systemd[1]: Starting sysroot-boot.service... Aug 13 01:00:00.857430 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:00:00.857463 kernel: BTRFS info (device vda6): using free space tree Aug 13 01:00:00.857477 kernel: BTRFS info (device vda6): has skinny extents Aug 13 01:00:00.860846 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 01:00:00.874821 systemd[1]: Finished sysroot-boot.service. Aug 13 01:00:00.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.879430 ignition[832]: INFO : Ignition 2.14.0 Aug 13 01:00:00.879430 ignition[832]: INFO : Stage: mount Aug 13 01:00:00.881078 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:00.881078 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:00.881078 ignition[832]: INFO : mount: mount passed Aug 13 01:00:00.881078 ignition[832]: INFO : Ignition finished successfully Aug 13 01:00:00.885766 systemd[1]: Finished ignition-mount.service. Aug 13 01:00:00.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:00.887201 systemd[1]: Starting ignition-files.service... 
Aug 13 01:00:00.901149 ignition[842]: INFO : Ignition 2.14.0 Aug 13 01:00:00.901149 ignition[842]: INFO : Stage: files Aug 13 01:00:00.947032 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:00.947032 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:00.949348 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:00:00.951043 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:00:00.951043 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:00:00.953861 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:00:00.955223 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:00:00.957075 unknown[842]: wrote ssh authorized keys file for user: core Aug 13 01:00:01.027390 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:00:01.029204 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:00:01.031301 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 01:00:01.089024 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:00:01.264752 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:00:01.289563 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:00:01.289563 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 01:00:01.547803 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:00:01.699291 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:00:01.699291 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:00:01.702938 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 01:00:01.824934 systemd-networkd[718]: eth0: Gained IPv6LL Aug 13 01:00:02.170746 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 01:00:02.788618 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:00:02.788618 ignition[842]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 01:00:02.792859 ignition[842]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:00:02.844142 ignition[842]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:00:02.859032 ignition[842]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 01:00:02.859032 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:00:02.859032 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:00:02.859032 ignition[842]: INFO : files: files passed Aug 13 01:00:02.859032 ignition[842]: INFO : Ignition finished successfully Aug 13 01:00:02.865963 systemd[1]: Finished ignition-files.service. Aug 13 01:00:02.900893 kernel: kauditd_printk_skb: 24 callbacks suppressed Aug 13 01:00:02.900922 kernel: audit: type=1130 audit(1755046802.865:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:02.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.900905 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 01:00:02.901328 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 01:00:02.902042 systemd[1]: Starting ignition-quench.service... Aug 13 01:00:02.905122 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:00:02.913018 kernel: audit: type=1130 audit(1755046802.905:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.913042 kernel: audit: type=1131 audit(1755046802.905:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.905208 systemd[1]: Finished ignition-quench.service. Aug 13 01:00:02.916985 initrd-setup-root-after-ignition[868]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 01:00:02.919803 initrd-setup-root-after-ignition[870]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:00:02.921691 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Aug 13 01:00:02.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.923650 systemd[1]: Reached target ignition-complete.target. Aug 13 01:00:02.928270 kernel: audit: type=1130 audit(1755046802.922:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.928868 systemd[1]: Starting initrd-parse-etc.service... Aug 13 01:00:02.941785 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:00:02.942750 systemd[1]: Finished initrd-parse-etc.service. Aug 13 01:00:02.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.944341 systemd[1]: Reached target initrd-fs.target. Aug 13 01:00:02.967224 kernel: audit: type=1130 audit(1755046802.943:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.967250 kernel: audit: type=1131 audit(1755046802.943:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.967174 systemd[1]: Reached target initrd.target. Aug 13 01:00:02.968606 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Aug 13 01:00:02.970448 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 01:00:02.980262 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 01:00:02.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.982433 systemd[1]: Starting initrd-cleanup.service... Aug 13 01:00:02.986568 kernel: audit: type=1130 audit(1755046802.981:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.990898 systemd[1]: Stopped target nss-lookup.target. Aug 13 01:00:02.992499 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 01:00:02.994238 systemd[1]: Stopped target timers.target. Aug 13 01:00:02.995781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:00:02.996799 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 01:00:02.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:02.998506 systemd[1]: Stopped target initrd.target. Aug 13 01:00:03.002823 kernel: audit: type=1131 audit(1755046802.997:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.002889 systemd[1]: Stopped target basic.target. Aug 13 01:00:03.004420 systemd[1]: Stopped target ignition-complete.target. Aug 13 01:00:03.006221 systemd[1]: Stopped target ignition-diskful.target. Aug 13 01:00:03.008021 systemd[1]: Stopped target initrd-root-device.target. Aug 13 01:00:03.009833 systemd[1]: Stopped target remote-fs.target. Aug 13 01:00:03.011464 systemd[1]: Stopped target remote-fs-pre.target. 
Aug 13 01:00:03.013228 systemd[1]: Stopped target sysinit.target. Aug 13 01:00:03.014854 systemd[1]: Stopped target local-fs.target. Aug 13 01:00:03.016442 systemd[1]: Stopped target local-fs-pre.target. Aug 13 01:00:03.031856 systemd[1]: Stopped target swap.target. Aug 13 01:00:03.033309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:00:03.034327 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 01:00:03.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.036018 systemd[1]: Stopped target cryptsetup.target. Aug 13 01:00:03.040300 kernel: audit: type=1131 audit(1755046803.035:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.040350 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:00:03.041378 systemd[1]: Stopped dracut-initqueue.service. Aug 13 01:00:03.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.043105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:00:03.046798 kernel: audit: type=1131 audit(1755046803.042:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.043213 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 01:00:03.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:03.048539 systemd[1]: Stopped target paths.target. Aug 13 01:00:03.050074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:00:03.054837 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 01:00:03.056659 systemd[1]: Stopped target slices.target. Aug 13 01:00:03.058219 systemd[1]: Stopped target sockets.target. Aug 13 01:00:03.059741 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:00:03.060645 systemd[1]: Closed iscsid.socket. Aug 13 01:00:03.062072 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:00:03.062996 systemd[1]: Closed iscsiuio.socket. Aug 13 01:00:03.064386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:00:03.065551 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 01:00:03.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.117563 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:00:03.117649 systemd[1]: Stopped ignition-files.service. Aug 13 01:00:03.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.120790 systemd[1]: Stopping ignition-mount.service... Aug 13 01:00:03.122830 systemd[1]: Stopping sysroot-boot.service... Aug 13 01:00:03.124222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:00:03.125306 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 01:00:03.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:03.127061 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:00:03.127185 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 01:00:03.129860 ignition[883]: INFO : Ignition 2.14.0 Aug 13 01:00:03.129860 ignition[883]: INFO : Stage: umount Aug 13 01:00:03.129860 ignition[883]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:00:03.129860 ignition[883]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 01:00:03.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.132598 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:00:03.136656 ignition[883]: INFO : umount: umount passed Aug 13 01:00:03.136656 ignition[883]: INFO : Ignition finished successfully Aug 13 01:00:03.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.132686 systemd[1]: Stopped ignition-mount.service. Aug 13 01:00:03.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.134005 systemd[1]: Stopped target network.target. 
Aug 13 01:00:03.135571 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:00:03.135678 systemd[1]: Stopped ignition-disks.service. Aug 13 01:00:03.137299 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:00:03.137398 systemd[1]: Stopped ignition-kargs.service. Aug 13 01:00:03.138907 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:00:03.139032 systemd[1]: Stopped ignition-setup.service. Aug 13 01:00:03.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.140463 systemd[1]: Stopping systemd-networkd.service... Aug 13 01:00:03.140898 systemd[1]: Stopping systemd-resolved.service... Aug 13 01:00:03.143521 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:00:03.146920 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:00:03.147009 systemd[1]: Finished initrd-cleanup.service. Aug 13 01:00:03.151868 systemd-networkd[718]: eth0: DHCPv6 lease lost Aug 13 01:00:03.153604 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:00:03.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.153715 systemd[1]: Stopped systemd-networkd.service. Aug 13 01:00:03.183432 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:00:03.183462 systemd[1]: Closed systemd-networkd.socket. Aug 13 01:00:03.184900 systemd[1]: Stopping network-cleanup.service... 
Aug 13 01:00:03.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.185623 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:00:03.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.185685 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 01:00:03.188384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:00:03.191000 audit: BPF prog-id=9 op=UNLOAD Aug 13 01:00:03.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.188432 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:00:03.190855 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:00:03.190891 systemd[1]: Stopped systemd-modules-load.service. Aug 13 01:00:03.192549 systemd[1]: Stopping systemd-udevd.service... Aug 13 01:00:03.195309 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:00:03.199050 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:00:03.199175 systemd[1]: Stopped systemd-resolved.service. Aug 13 01:00:03.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.202024 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:00:03.202166 systemd[1]: Stopped systemd-udevd.service. 
Aug 13 01:00:03.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.204673 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:00:03.204000 audit: BPF prog-id=6 op=UNLOAD Aug 13 01:00:03.205630 systemd[1]: Stopped network-cleanup.service. Aug 13 01:00:03.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.207310 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:00:03.208281 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 01:00:03.209843 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:00:03.209872 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 01:00:03.212325 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:00:03.213251 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 01:00:03.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.214741 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:00:03.215676 systemd[1]: Stopped dracut-cmdline.service. Aug 13 01:00:03.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.217135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:00:03.217182 systemd[1]: Stopped dracut-cmdline-ask.service. 
Aug 13 01:00:03.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.219673 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 01:00:03.221289 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:00:03.221340 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 01:00:03.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.259683 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:00:03.259735 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 01:00:03.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.261440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:00:03.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.261479 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 01:00:03.263464 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:00:03.270795 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:00:03.270873 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 01:00:03.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:03.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.309111 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:00:03.309236 systemd[1]: Stopped sysroot-boot.service. Aug 13 01:00:03.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.310956 systemd[1]: Reached target initrd-switch-root.target. Aug 13 01:00:03.311433 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:00:03.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:03.311514 systemd[1]: Stopped initrd-setup-root.service. Aug 13 01:00:03.312828 systemd[1]: Starting initrd-switch-root.service... Aug 13 01:00:03.334031 systemd[1]: Switching root. Aug 13 01:00:03.354924 iscsid[723]: iscsid shutting down. Aug 13 01:00:03.355668 systemd-journald[199]: Journal stopped Aug 13 01:00:06.650197 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). Aug 13 01:00:06.650239 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 01:00:06.650263 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 01:00:06.650275 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 01:00:06.650285 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:00:06.650295 kernel: SELinux: policy capability open_perms=1 Aug 13 01:00:06.650307 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:00:06.650319 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:00:06.650328 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:00:06.650337 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:00:06.650351 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:00:06.650362 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:00:06.650372 systemd[1]: Successfully loaded SELinux policy in 38.511ms. Aug 13 01:00:06.650390 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.393ms. Aug 13 01:00:06.650404 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:00:06.650417 systemd[1]: Detected virtualization kvm. Aug 13 01:00:06.650430 systemd[1]: Detected architecture x86-64. Aug 13 01:00:06.650442 systemd[1]: Detected first boot. Aug 13 01:00:06.650456 systemd[1]: Initializing machine ID from VM UUID. Aug 13 01:00:06.650472 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 01:00:06.650485 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:00:06.650498 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 13 01:00:06.650513 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:00:06.650536 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:00:06.650553 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 01:00:06.650563 systemd[1]: Stopped iscsiuio.service. Aug 13 01:00:06.650573 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 01:00:06.650583 systemd[1]: Stopped iscsid.service. Aug 13 01:00:06.650593 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:00:06.650604 systemd[1]: Stopped initrd-switch-root.service. Aug 13 01:00:06.650614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:00:06.650624 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 01:00:06.650648 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 01:00:06.650658 systemd[1]: Created slice system-getty.slice. Aug 13 01:00:06.650669 systemd[1]: Created slice system-modprobe.slice. Aug 13 01:00:06.650680 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 01:00:06.650690 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 01:00:06.650701 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 01:00:06.650711 systemd[1]: Created slice user.slice. Aug 13 01:00:06.650721 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:00:06.650732 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 01:00:06.650743 systemd[1]: Set up automount boot.automount. Aug 13 01:00:06.650754 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 01:00:06.650764 systemd[1]: Stopped target initrd-switch-root.target. 
Aug 13 01:00:06.650832 systemd[1]: Stopped target initrd-fs.target. Aug 13 01:00:06.650843 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 01:00:06.650853 systemd[1]: Reached target integritysetup.target. Aug 13 01:00:06.650863 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:00:06.650874 systemd[1]: Reached target remote-fs.target. Aug 13 01:00:06.650886 systemd[1]: Reached target slices.target. Aug 13 01:00:06.650897 systemd[1]: Reached target swap.target. Aug 13 01:00:06.650907 systemd[1]: Reached target torcx.target. Aug 13 01:00:06.650917 systemd[1]: Reached target veritysetup.target. Aug 13 01:00:06.650927 systemd[1]: Listening on systemd-coredump.socket. Aug 13 01:00:06.650938 systemd[1]: Listening on systemd-initctl.socket. Aug 13 01:00:06.650948 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:00:06.650959 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:00:06.650969 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:00:06.650979 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 01:00:06.650995 systemd[1]: Mounting dev-hugepages.mount... Aug 13 01:00:06.651005 systemd[1]: Mounting dev-mqueue.mount... Aug 13 01:00:06.651016 systemd[1]: Mounting media.mount... Aug 13 01:00:06.651026 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:06.651036 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 01:00:06.651046 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 01:00:06.651056 systemd[1]: Mounting tmp.mount... Aug 13 01:00:06.651068 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 01:00:06.651079 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:00:06.651094 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:00:06.651112 systemd[1]: Starting modprobe@configfs.service... Aug 13 01:00:06.651123 systemd[1]: Starting modprobe@dm_mod.service... 
Aug 13 01:00:06.651133 systemd[1]: Starting modprobe@drm.service... Aug 13 01:00:06.651144 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:00:06.651154 systemd[1]: Starting modprobe@fuse.service... Aug 13 01:00:06.651164 systemd[1]: Starting modprobe@loop.service... Aug 13 01:00:06.651175 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:00:06.651185 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:00:06.651200 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 01:00:06.651210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:00:06.651221 kernel: loop: module loaded Aug 13 01:00:06.651231 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:00:06.651241 systemd[1]: Stopped systemd-journald.service. Aug 13 01:00:06.651251 kernel: fuse: init (API version 7.34) Aug 13 01:00:06.651261 systemd[1]: Starting systemd-journald.service... Aug 13 01:00:06.651271 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:00:06.651281 systemd[1]: Starting systemd-network-generator.service... Aug 13 01:00:06.651299 systemd[1]: Starting systemd-remount-fs.service... Aug 13 01:00:06.651313 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:00:06.651326 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:00:06.651340 systemd[1]: Stopped verity-setup.service. Aug 13 01:00:06.651361 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:06.651379 systemd-journald[1003]: Journal started Aug 13 01:00:06.651424 systemd-journald[1003]: Runtime Journal (/run/log/journal/b060213bf4ba446cb9402f2ab1f981a2) is 6.0M, max 48.5M, 42.5M free. 
Aug 13 01:00:03.413000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:00:04.339000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:00:04.339000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:00:04.339000 audit: BPF prog-id=10 op=LOAD Aug 13 01:00:04.339000 audit: BPF prog-id=10 op=UNLOAD Aug 13 01:00:04.339000 audit: BPF prog-id=11 op=LOAD Aug 13 01:00:04.339000 audit: BPF prog-id=11 op=UNLOAD Aug 13 01:00:04.371000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 01:00:04.371000 audit[917]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:00:04.371000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 01:00:04.373000 audit[917]: AVC avc: denied { associate } for pid=917 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 01:00:04.373000 audit[917]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155999 a2=1ed a3=0 items=2 ppid=900 pid=917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:00:04.373000 audit: CWD cwd="/" Aug 13 01:00:04.373000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:04.373000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:04.373000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 01:00:06.506000 audit: BPF prog-id=12 op=LOAD Aug 13 01:00:06.506000 audit: BPF prog-id=3 op=UNLOAD Aug 13 01:00:06.506000 audit: BPF prog-id=13 op=LOAD Aug 13 01:00:06.506000 audit: BPF prog-id=14 op=LOAD Aug 13 01:00:06.506000 audit: BPF prog-id=4 op=UNLOAD Aug 13 01:00:06.506000 audit: BPF prog-id=5 op=UNLOAD Aug 13 01:00:06.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:06.521000 audit: BPF prog-id=12 op=UNLOAD Aug 13 01:00:06.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:06.631000 audit: BPF prog-id=15 op=LOAD Aug 13 01:00:06.631000 audit: BPF prog-id=16 op=LOAD Aug 13 01:00:06.631000 audit: BPF prog-id=17 op=LOAD Aug 13 01:00:06.631000 audit: BPF prog-id=13 op=UNLOAD Aug 13 01:00:06.631000 audit: BPF prog-id=14 op=UNLOAD Aug 13 01:00:06.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.648000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:00:06.648000 audit[1003]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd55086dc0 a2=4000 a3=7ffd55086e5c items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:00:06.648000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:00:06.505202 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:00:04.371064 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:00:06.505213 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 01:00:04.371287 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 01:00:06.508626 systemd[1]: systemd-journald.service: Deactivated successfully. 
Aug 13 01:00:04.371303 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 01:00:04.371332 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 01:00:04.371340 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 01:00:04.371367 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 01:00:04.371378 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 01:00:04.371565 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 01:00:04.371601 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 01:00:04.371614 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 01:00:04.372144 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 01:00:06.653794 systemd[1]: Started systemd-journald.service. 
Aug 13 01:00:04.372181 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 01:00:04.372201 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 01:00:06.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:04.372217 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 01:00:04.372235 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 01:00:04.372248 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 01:00:06.216320 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:00:06.216593 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc 
/bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:00:06.216696 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:00:06.654420 systemd[1]: Mounted dev-hugepages.mount. Aug 13 01:00:06.216863 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 01:00:06.216911 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 01:00:06.216965 /usr/lib/systemd/system-generators/torcx-generator[917]: time="2025-08-13T01:00:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 01:00:06.655313 systemd[1]: Mounted dev-mqueue.mount. Aug 13 01:00:06.656095 systemd[1]: Mounted media.mount. Aug 13 01:00:06.656886 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 01:00:06.657737 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 01:00:06.658589 systemd[1]: Mounted tmp.mount. Aug 13 01:00:06.659496 systemd[1]: Finished flatcar-tmpfiles.service. 
Aug 13 01:00:06.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.660616 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:00:06.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.661675 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:00:06.661881 systemd[1]: Finished modprobe@configfs.service. Aug 13 01:00:06.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.663002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:00:06.663147 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:00:06.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.664155 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:00:06.664289 systemd[1]: Finished modprobe@drm.service. 
Aug 13 01:00:06.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.665281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:00:06.665456 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:00:06.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.666499 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:00:06.666679 systemd[1]: Finished modprobe@fuse.service. Aug 13 01:00:06.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.667681 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:00:06.667848 systemd[1]: Finished modprobe@loop.service. 
Aug 13 01:00:06.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.669011 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:00:06.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.670154 systemd[1]: Finished systemd-network-generator.service. Aug 13 01:00:06.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.671314 systemd[1]: Finished systemd-remount-fs.service. Aug 13 01:00:06.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.672571 systemd[1]: Reached target network-pre.target. Aug 13 01:00:06.674429 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:00:06.676224 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:00:06.677042 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:00:06.678682 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:00:06.680511 systemd[1]: Starting systemd-journal-flush.service... 
Aug 13 01:00:06.681357 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:00:06.682499 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:00:06.683335 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:00:06.688385 systemd-journald[1003]: Time spent on flushing to /var/log/journal/b060213bf4ba446cb9402f2ab1f981a2 is 14.202ms for 1094 entries. Aug 13 01:00:06.688385 systemd-journald[1003]: System Journal (/var/log/journal/b060213bf4ba446cb9402f2ab1f981a2) is 8.0M, max 195.6M, 187.6M free. Aug 13 01:00:06.773934 systemd-journald[1003]: Received client request to flush runtime journal. Aug 13 01:00:06.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.684350 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:00:06.687050 systemd[1]: Starting systemd-sysusers.service... Aug 13 01:00:06.690035 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:00:06.691060 systemd[1]: Mounted sys-kernel-config.mount. 
Aug 13 01:00:06.775238 udevadm[1021]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 01:00:06.703164 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:00:06.705196 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:00:06.742266 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:00:06.746544 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:00:06.747991 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:00:06.751966 systemd[1]: Finished systemd-sysusers.service. Aug 13 01:00:06.754659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:00:06.775009 systemd[1]: Finished systemd-journal-flush.service. Aug 13 01:00:06.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:06.781596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:00:06.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.342751 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 01:00:07.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.343000 audit: BPF prog-id=18 op=LOAD Aug 13 01:00:07.343000 audit: BPF prog-id=19 op=LOAD Aug 13 01:00:07.343000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:00:07.343000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:00:07.345042 systemd[1]: Starting systemd-udevd.service... 
Aug 13 01:00:07.361371 systemd-udevd[1026]: Using default interface naming scheme 'v252'. Aug 13 01:00:07.375108 systemd[1]: Started systemd-udevd.service. Aug 13 01:00:07.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.377000 audit: BPF prog-id=20 op=LOAD Aug 13 01:00:07.379327 systemd[1]: Starting systemd-networkd.service... Aug 13 01:00:07.382000 audit: BPF prog-id=21 op=LOAD Aug 13 01:00:07.382000 audit: BPF prog-id=22 op=LOAD Aug 13 01:00:07.383000 audit: BPF prog-id=23 op=LOAD Aug 13 01:00:07.385080 systemd[1]: Starting systemd-userdbd.service... Aug 13 01:00:07.406982 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Aug 13 01:00:07.421218 systemd[1]: Started systemd-userdbd.service. Aug 13 01:00:07.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.437570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Aug 13 01:00:07.449802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 01:00:07.454798 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:00:07.471000 audit[1050]: AVC avc: denied { confidentiality } for pid=1050 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:00:07.471000 audit[1050]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b50569f9d0 a1=338ac a2=7f2dedb6fbc5 a3=5 items=110 ppid=1026 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:00:07.471000 audit: CWD cwd="/" Aug 13 01:00:07.471000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=1 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=2 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=3 name=(null) inode=15430 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=4 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=5 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=6 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=7 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=8 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=9 name=(null) inode=15433 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=10 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=11 name=(null) inode=15434 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=12 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=13 name=(null) inode=15435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=14 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=15 name=(null) inode=15436 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=16 name=(null) inode=15432 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=17 name=(null) inode=15437 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=18 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=19 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=20 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=21 name=(null) inode=15439 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=22 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=23 name=(null) inode=15440 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=24 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=25 name=(null) inode=15441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=26 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=27 name=(null) inode=15442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=28 name=(null) inode=15438 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=29 name=(null) inode=15443 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=30 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=31 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=32 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:00:07.471000 audit: PATH item=33 name=(null) inode=15445 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=34 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=35 name=(null) inode=15446 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=36 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=37 name=(null) inode=15447 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=38 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=39 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=40 name=(null) inode=15444 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=41 name=(null) inode=15449 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=42 
name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=43 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=44 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=45 name=(null) inode=15451 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=46 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=47 name=(null) inode=15452 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=48 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=49 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=50 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=51 name=(null) inode=15454 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=52 name=(null) inode=15450 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=53 name=(null) inode=15455 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=55 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=56 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=57 name=(null) inode=15457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=58 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=59 name=(null) inode=15458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=60 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=61 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=62 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=63 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=64 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=65 name=(null) inode=15461 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=66 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=67 name=(null) inode=15462 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=68 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=69 name=(null) inode=15463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=70 name=(null) inode=15459 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=71 name=(null) inode=15464 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=72 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=73 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=74 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=75 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=76 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=77 name=(null) inode=15467 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=78 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=79 name=(null) inode=15468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=80 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=81 name=(null) inode=15469 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=82 name=(null) inode=15465 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=83 name=(null) inode=15470 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=84 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=85 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=86 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=87 name=(null) inode=15472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:00:07.471000 audit: PATH item=88 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=89 name=(null) inode=15473 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=90 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=91 name=(null) inode=15474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=92 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=93 name=(null) inode=15475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=94 name=(null) inode=15471 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=95 name=(null) inode=15476 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=96 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=97 
name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=98 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=99 name=(null) inode=15478 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=100 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=101 name=(null) inode=15479 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=102 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=103 name=(null) inode=15480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=104 name=(null) inode=15477 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=105 name=(null) inode=15481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=106 name=(null) inode=15477 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=107 name=(null) inode=15482 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PATH item=109 name=(null) inode=15483 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:00:07.471000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 01:00:07.482794 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:00:07.490980 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 01:00:07.491174 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:00:07.488092 systemd-networkd[1034]: lo: Link UP Aug 13 01:00:07.488099 systemd-networkd[1034]: lo: Gained carrier Aug 13 01:00:07.488718 systemd-networkd[1034]: Enumeration completed Aug 13 01:00:07.488884 systemd[1]: Started systemd-networkd.service. Aug 13 01:00:07.489026 systemd-networkd[1034]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:00:07.490281 systemd-networkd[1034]: eth0: Link UP Aug 13 01:00:07.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:07.490285 systemd-networkd[1034]: eth0: Gained carrier Aug 13 01:00:07.503895 systemd-networkd[1034]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 01:00:07.519821 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 01:00:07.524793 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:00:07.579061 kernel: kvm: Nested Virtualization enabled Aug 13 01:00:07.579156 kernel: SVM: kvm: Nested Paging enabled Aug 13 01:00:07.579171 kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 01:00:07.580293 kernel: SVM: Virtual GIF supported Aug 13 01:00:07.595802 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:00:07.622204 systemd[1]: Finished systemd-udev-settle.service. Aug 13 01:00:07.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.624542 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:00:07.632470 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:00:07.657737 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:00:07.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.658903 systemd[1]: Reached target cryptsetup.target. Aug 13 01:00:07.660915 systemd[1]: Starting lvm2-activation.service... Aug 13 01:00:07.665249 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:00:07.698882 systemd[1]: Finished lvm2-activation.service. 
Aug 13 01:00:07.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.699949 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:00:07.700858 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:00:07.700885 systemd[1]: Reached target local-fs.target. Aug 13 01:00:07.701761 systemd[1]: Reached target machines.target. Aug 13 01:00:07.704110 systemd[1]: Starting ldconfig.service... Aug 13 01:00:07.705214 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:00:07.705278 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:07.706318 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:00:07.708365 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:00:07.710859 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:00:07.712920 systemd[1]: Starting systemd-sysext.service... Aug 13 01:00:07.714383 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) Aug 13 01:00:07.715525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:00:07.722481 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:00:07.728935 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:00:07.729135 systemd[1]: Unmounted usr-share-oem.mount. 
Aug 13 01:00:07.739809 kernel: loop0: detected capacity change from 0 to 229808 Aug 13 01:00:07.763668 systemd-fsck[1073]: fsck.fat 4.2 (2021-01-31) Aug 13 01:00:07.763668 systemd-fsck[1073]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 01:00:07.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:07.765440 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:00:07.767691 systemd[1]: Mounting boot.mount... Aug 13 01:00:07.769658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 01:00:08.154812 systemd[1]: Mounted boot.mount. Aug 13 01:00:08.166811 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:00:08.170937 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:00:08.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.173313 kernel: kauditd_printk_skb: 226 callbacks suppressed Aug 13 01:00:08.173372 kernel: audit: type=1130 audit(1755046808.171:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.178477 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:00:08.179183 systemd[1]: Finished systemd-machine-id-commit.service. 
Aug 13 01:00:08.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.183880 kernel: audit: type=1130 audit(1755046808.179:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.189785 kernel: loop1: detected capacity change from 0 to 229808 Aug 13 01:00:08.195613 (sd-sysext)[1079]: Using extensions 'kubernetes'. Aug 13 01:00:08.196030 (sd-sysext)[1079]: Merged extensions into '/usr'. Aug 13 01:00:08.214840 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:08.218510 systemd[1]: Mounting usr-share-oem.mount... Aug 13 01:00:08.219668 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.221580 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:00:08.224108 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:00:08.226713 systemd[1]: Starting modprobe@loop.service... Aug 13 01:00:08.227705 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.228082 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:08.228230 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:08.231054 systemd[1]: Mounted usr-share-oem.mount. Aug 13 01:00:08.232202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:00:08.232323 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 01:00:08.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.233499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:00:08.233622 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:00:08.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.236799 kernel: audit: type=1130 audit(1755046808.232:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.236839 kernel: audit: type=1131 audit(1755046808.232:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.241278 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:00:08.241390 systemd[1]: Finished modprobe@loop.service. Aug 13 01:00:08.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:00:08.244812 kernel: audit: type=1130 audit(1755046808.240:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.244842 kernel: audit: type=1131 audit(1755046808.240:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.248958 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:00:08.249064 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.250403 systemd[1]: Finished systemd-sysext.service. Aug 13 01:00:08.250515 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:00:08.251802 kernel: audit: type=1130 audit(1755046808.247:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.251865 kernel: audit: type=1131 audit(1755046808.247:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 01:00:08.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.256207 systemd[1]: Finished ldconfig.service. Aug 13 01:00:08.259796 kernel: audit: type=1130 audit(1755046808.255:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.259843 kernel: audit: type=1130 audit(1755046808.258:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.261180 systemd[1]: Starting ensure-sysext.service... Aug 13 01:00:08.264430 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 01:00:08.270296 systemd[1]: Reloading. Aug 13 01:00:08.276108 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 01:00:08.276972 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:00:08.278495 systemd-tmpfiles[1086]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 01:00:08.331851 /usr/lib/systemd/system-generators/torcx-generator[1106]: time="2025-08-13T01:00:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:00:08.331878 /usr/lib/systemd/system-generators/torcx-generator[1106]: time="2025-08-13T01:00:08Z" level=info msg="torcx already run" Aug 13 01:00:08.404253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:00:08.404272 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:00:08.422953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 01:00:08.476000 audit: BPF prog-id=24 op=LOAD Aug 13 01:00:08.476000 audit: BPF prog-id=15 op=UNLOAD Aug 13 01:00:08.476000 audit: BPF prog-id=25 op=LOAD Aug 13 01:00:08.476000 audit: BPF prog-id=26 op=LOAD Aug 13 01:00:08.476000 audit: BPF prog-id=16 op=UNLOAD Aug 13 01:00:08.476000 audit: BPF prog-id=17 op=UNLOAD Aug 13 01:00:08.478000 audit: BPF prog-id=27 op=LOAD Aug 13 01:00:08.478000 audit: BPF prog-id=28 op=LOAD Aug 13 01:00:08.478000 audit: BPF prog-id=18 op=UNLOAD Aug 13 01:00:08.478000 audit: BPF prog-id=19 op=UNLOAD Aug 13 01:00:08.479000 audit: BPF prog-id=29 op=LOAD Aug 13 01:00:08.479000 audit: BPF prog-id=20 op=UNLOAD Aug 13 01:00:08.481000 audit: BPF prog-id=30 op=LOAD Aug 13 01:00:08.481000 audit: BPF prog-id=21 op=UNLOAD Aug 13 01:00:08.481000 audit: BPF prog-id=31 op=LOAD Aug 13 01:00:08.481000 audit: BPF prog-id=32 op=LOAD Aug 13 01:00:08.481000 audit: BPF prog-id=22 op=UNLOAD Aug 13 01:00:08.481000 audit: BPF prog-id=23 op=UNLOAD Aug 13 01:00:08.484338 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 01:00:08.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.489130 systemd[1]: Starting audit-rules.service... Aug 13 01:00:08.491015 systemd[1]: Starting clean-ca-certificates.service... Aug 13 01:00:08.493028 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 01:00:08.494000 audit: BPF prog-id=33 op=LOAD Aug 13 01:00:08.495920 systemd[1]: Starting systemd-resolved.service... Aug 13 01:00:08.496000 audit: BPF prog-id=34 op=LOAD Aug 13 01:00:08.498278 systemd[1]: Starting systemd-timesyncd.service... Aug 13 01:00:08.500127 systemd[1]: Starting systemd-update-utmp.service... Aug 13 01:00:08.501536 systemd[1]: Finished clean-ca-certificates.service. 
Aug 13 01:00:08.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.503000 audit[1159]: SYSTEM_BOOT pid=1159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.504523 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:00:08.507307 systemd[1]: Finished systemd-update-utmp.service. Aug 13 01:00:08.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.510736 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.512085 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:00:08.514007 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:00:08.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.515926 systemd[1]: Starting modprobe@loop.service... Aug 13 01:00:08.516802 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.516928 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Aug 13 01:00:08.517025 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:00:08.517780 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 01:00:08.519263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:00:08.519364 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:00:08.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.520648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:00:08.520754 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:00:08.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.522212 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:00:08.522317 systemd[1]: Finished modprobe@loop.service. Aug 13 01:00:08.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:08.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:08.524571 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.524000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 01:00:08.524000 audit[1171]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc02fe75a0 a2=420 a3=0 items=0 ppid=1148 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:00:08.524000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 01:00:08.525228 augenrules[1171]: No rules Aug 13 01:00:08.525841 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:00:08.527627 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:00:08.529429 systemd[1]: Starting modprobe@loop.service... Aug 13 01:00:08.530195 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.530291 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:08.531463 systemd[1]: Starting systemd-update-done.service... Aug 13 01:00:08.532321 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:00:08.533327 systemd[1]: Finished audit-rules.service. Aug 13 01:00:08.534458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:00:08.534562 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:00:08.535740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:00:08.535856 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:00:08.537040 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:00:08.537139 systemd[1]: Finished modprobe@loop.service. Aug 13 01:00:08.538324 systemd[1]: Finished systemd-update-done.service. Aug 13 01:00:08.539719 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:00:08.539826 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.542122 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.543240 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:00:08.545290 systemd[1]: Starting modprobe@drm.service... Aug 13 01:00:08.547455 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:00:08.549633 systemd[1]: Starting modprobe@loop.service... Aug 13 01:00:08.550601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.550743 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:08.552446 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 01:00:08.553562 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:00:08.554727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:00:08.555158 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:00:08.556547 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Aug 13 01:00:08.556694 systemd[1]: Finished modprobe@drm.service. Aug 13 01:00:08.557955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:00:08.558074 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:00:08.559394 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:00:08.559509 systemd[1]: Finished modprobe@loop.service. Aug 13 01:00:08.561900 systemd[1]: Finished ensure-sysext.service. Aug 13 01:00:08.563557 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:00:08.563606 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.564197 systemd[1]: Started systemd-timesyncd.service. Aug 13 01:00:08.565445 systemd[1]: Reached target time-set.target. Aug 13 01:00:08.565468 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 01:00:08.565506 systemd-timesyncd[1156]: Initial clock synchronization to Wed 2025-08-13 01:00:08.817292 UTC. Aug 13 01:00:08.569960 systemd-resolved[1154]: Positive Trust Anchors: Aug 13 01:00:08.570171 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:00:08.570270 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:00:08.576806 systemd-resolved[1154]: Defaulting to hostname 'linux'. Aug 13 01:00:08.578282 systemd[1]: Started systemd-resolved.service. Aug 13 01:00:08.579151 systemd[1]: Reached target network.target. 
Aug 13 01:00:08.579935 systemd[1]: Reached target nss-lookup.target. Aug 13 01:00:08.580741 systemd[1]: Reached target sysinit.target. Aug 13 01:00:08.581643 systemd[1]: Started motdgen.path. Aug 13 01:00:08.582373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 01:00:08.583598 systemd[1]: Started logrotate.timer. Aug 13 01:00:08.584405 systemd[1]: Started mdadm.timer. Aug 13 01:00:08.585110 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 01:00:08.585965 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:00:08.585988 systemd[1]: Reached target paths.target. Aug 13 01:00:08.586734 systemd[1]: Reached target timers.target. Aug 13 01:00:08.587878 systemd[1]: Listening on dbus.socket. Aug 13 01:00:08.589609 systemd[1]: Starting docker.socket... Aug 13 01:00:08.592750 systemd[1]: Listening on sshd.socket. Aug 13 01:00:08.593649 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:08.594008 systemd[1]: Listening on docker.socket. Aug 13 01:00:08.594843 systemd[1]: Reached target sockets.target. Aug 13 01:00:08.595661 systemd[1]: Reached target basic.target. Aug 13 01:00:08.596470 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.596493 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:00:08.597461 systemd[1]: Starting containerd.service... Aug 13 01:00:08.599152 systemd[1]: Starting dbus.service... Aug 13 01:00:08.600741 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 01:00:08.602790 systemd[1]: Starting extend-filesystems.service... 
Aug 13 01:00:08.604403 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 01:00:08.604480 jq[1190]: false Aug 13 01:00:08.607899 systemd[1]: Starting motdgen.service... Aug 13 01:00:08.609642 systemd[1]: Starting prepare-helm.service... Aug 13 01:00:08.611484 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 01:00:08.613987 systemd[1]: Starting sshd-keygen.service... Aug 13 01:00:08.615398 dbus-daemon[1189]: [system] SELinux support is enabled Aug 13 01:00:08.619482 systemd[1]: Starting systemd-logind.service... Aug 13 01:00:08.620528 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:00:08.620697 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:00:08.621355 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:00:08.622455 systemd[1]: Starting update-engine.service... Aug 13 01:00:08.624401 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 01:00:08.626186 systemd[1]: Started dbus.service. Aug 13 01:00:08.627299 jq[1208]: true Aug 13 01:00:08.630085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:00:08.630295 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Aug 13 01:00:08.631206 extend-filesystems[1191]: Found loop1 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found sr0 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda1 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda2 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda3 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found usr Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda4 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda6 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda7 Aug 13 01:00:08.633231 extend-filesystems[1191]: Found vda9 Aug 13 01:00:08.633231 extend-filesystems[1191]: Checking size of /dev/vda9 Aug 13 01:00:08.631208 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:00:08.662402 extend-filesystems[1191]: Resized partition /dev/vda9 Aug 13 01:00:08.631366 systemd[1]: Finished motdgen.service. Aug 13 01:00:08.663490 tar[1213]: linux-amd64/LICENSE Aug 13 01:00:08.663490 tar[1213]: linux-amd64/helm Aug 13 01:00:08.639758 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:00:08.663931 jq[1214]: true Aug 13 01:00:08.640931 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 01:00:08.649076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:00:08.649121 systemd[1]: Reached target system-config.target. Aug 13 01:00:08.651436 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:00:08.651455 systemd[1]: Reached target user-config.target. 
Aug 13 01:00:08.668821 update_engine[1206]: I0813 01:00:08.665035 1206 main.cc:92] Flatcar Update Engine starting Aug 13 01:00:08.668821 update_engine[1206]: I0813 01:00:08.667346 1206 update_check_scheduler.cc:74] Next update check in 3m19s Aug 13 01:00:08.667271 systemd[1]: Started update-engine.service. Aug 13 01:00:08.678943 extend-filesystems[1221]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 01:00:08.678315 systemd[1]: Started locksmithd.service. Aug 13 01:00:08.685797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 01:00:08.707930 systemd-logind[1204]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 01:00:08.707958 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:00:08.708195 systemd-logind[1204]: New seat seat0. Aug 13 01:00:08.709919 systemd[1]: Started systemd-logind.service. Aug 13 01:00:08.726815 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 01:00:08.729008 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:08.729067 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:00:08.749523 env[1216]: time="2025-08-13T01:00:08.749467298Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 01:00:08.750928 extend-filesystems[1221]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 01:00:08.750928 extend-filesystems[1221]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:00:08.750928 extend-filesystems[1221]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 01:00:08.755078 extend-filesystems[1191]: Resized filesystem in /dev/vda9 Aug 13 01:00:08.757477 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:00:08.757642 systemd[1]: Finished extend-filesystems.service. 
Aug 13 01:00:08.759663 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:00:08.759417 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 01:00:08.767013 env[1216]: time="2025-08-13T01:00:08.766960728Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 01:00:08.767108 env[1216]: time="2025-08-13T01:00:08.767080002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:00:08.769326 env[1216]: time="2025-08-13T01:00:08.769279095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:00:08.769326 env[1216]: time="2025-08-13T01:00:08.769322627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:00:08.769790 env[1216]: time="2025-08-13T01:00:08.769606209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:00:08.769790 env[1216]: time="2025-08-13T01:00:08.769623060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 01:00:08.769790 env[1216]: time="2025-08-13T01:00:08.769653167Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 01:00:08.769790 env[1216]: time="2025-08-13T01:00:08.769664037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:00:08.769790 env[1216]: time="2025-08-13T01:00:08.769743125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:00:08.770024 env[1216]: time="2025-08-13T01:00:08.769994898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:00:08.770150 env[1216]: time="2025-08-13T01:00:08.770121275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:00:08.770197 env[1216]: time="2025-08-13T01:00:08.770153645Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 01:00:08.770257 env[1216]: time="2025-08-13T01:00:08.770231421Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 01:00:08.770257 env[1216]: time="2025-08-13T01:00:08.770253603Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:00:08.775355 env[1216]: time="2025-08-13T01:00:08.775328789Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 01:00:08.775404 env[1216]: time="2025-08-13T01:00:08.775356641Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 01:00:08.775404 env[1216]: time="2025-08-13T01:00:08.775370086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 01:00:08.775404 env[1216]: time="2025-08-13T01:00:08.775397918Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775412766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775427083Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775586251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775607872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775634181Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775649129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775662905Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775677663Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 01:00:08.775965 env[1216]: time="2025-08-13T01:00:08.775925177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 01:00:08.776138 env[1216]: time="2025-08-13T01:00:08.776007401Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 01:00:08.776244 env[1216]: time="2025-08-13T01:00:08.776225069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:00:08.776303 env[1216]: time="2025-08-13T01:00:08.776288849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776330 env[1216]: time="2025-08-13T01:00:08.776305510Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 01:00:08.776363 env[1216]: time="2025-08-13T01:00:08.776350474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776387 env[1216]: time="2025-08-13T01:00:08.776365863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776413 env[1216]: time="2025-08-13T01:00:08.776389878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776413 env[1216]: time="2025-08-13T01:00:08.776405898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776466 env[1216]: time="2025-08-13T01:00:08.776420395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776466 env[1216]: time="2025-08-13T01:00:08.776435814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776466 env[1216]: time="2025-08-13T01:00:08.776450512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776466 env[1216]: time="2025-08-13T01:00:08.776463466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776561 env[1216]: time="2025-08-13T01:00:08.776479166Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:00:08.776655 env[1216]: time="2025-08-13T01:00:08.776637062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776681 env[1216]: time="2025-08-13T01:00:08.776665836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776704 env[1216]: time="2025-08-13T01:00:08.776679742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 01:00:08.776704 env[1216]: time="2025-08-13T01:00:08.776692135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 01:00:08.776745 env[1216]: time="2025-08-13T01:00:08.776707333Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 01:00:08.776745 env[1216]: time="2025-08-13T01:00:08.776718424Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 01:00:08.776745 env[1216]: time="2025-08-13T01:00:08.776736769Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 01:00:08.776842 env[1216]: time="2025-08-13T01:00:08.776826938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 01:00:08.777140 env[1216]: time="2025-08-13T01:00:08.777084260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 01:00:08.777140 env[1216]: time="2025-08-13T01:00:08.777139624Z" level=info msg="Connect containerd service" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777168448Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777784814Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777888739Z" level=info msg="Start subscribing containerd event" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777926389Z" level=info msg="Start recovering state" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777972015Z" level=info msg="Start event monitor" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777981262Z" level=info msg="Start snapshots syncer" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777988996Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:00:08.778091 env[1216]: time="2025-08-13T01:00:08.777995769Z" level=info msg="Start streaming server" Aug 13 01:00:08.778350 env[1216]: time="2025-08-13T01:00:08.778331980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:00:08.778381 env[1216]: time="2025-08-13T01:00:08.778368218Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:00:08.778972 env[1216]: time="2025-08-13T01:00:08.778412711Z" level=info msg="containerd successfully booted in 0.048306s" Aug 13 01:00:08.778492 systemd[1]: Started containerd.service.
Aug 13 01:00:08.782417 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:00:08.889880 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:00:08.910259 systemd[1]: Finished sshd-keygen.service. Aug 13 01:00:08.912593 systemd[1]: Starting issuegen.service... Aug 13 01:00:08.918161 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:00:08.918289 systemd[1]: Finished issuegen.service. Aug 13 01:00:08.920624 systemd[1]: Starting systemd-user-sessions.service... Aug 13 01:00:08.927676 systemd[1]: Finished systemd-user-sessions.service. Aug 13 01:00:08.929936 systemd[1]: Started getty@tty1.service. Aug 13 01:00:08.931909 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 01:00:08.933002 systemd[1]: Reached target getty.target. Aug 13 01:00:09.057934 systemd-networkd[1034]: eth0: Gained IPv6LL Aug 13 01:00:09.060446 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 01:00:09.061923 systemd[1]: Reached target network-online.target. Aug 13 01:00:09.064468 systemd[1]: Starting kubelet.service... Aug 13 01:00:09.190875 tar[1213]: linux-amd64/README.md Aug 13 01:00:09.195272 systemd[1]: Finished prepare-helm.service. Aug 13 01:00:09.818639 systemd[1]: Started kubelet.service. Aug 13 01:00:09.820331 systemd[1]: Reached target multi-user.target. Aug 13 01:00:09.823007 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:00:09.831085 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:00:09.831303 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 01:00:09.832751 systemd[1]: Startup finished in 962ms (kernel) + 6.355s (initrd) + 6.458s (userspace) = 13.776s. 
Aug 13 01:00:10.257054 kubelet[1270]: E0813 01:00:10.256891 1270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:00:10.258894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:00:10.259061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:00:10.259358 systemd[1]: kubelet.service: Consumed 1.042s CPU time. Aug 13 01:00:11.165628 systemd[1]: Created slice system-sshd.slice. Aug 13 01:00:11.166884 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:41374.service. Aug 13 01:00:11.199319 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 41374 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:00:11.200696 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:00:11.207847 systemd[1]: Created slice user-500.slice. Aug 13 01:00:11.209155 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 01:00:11.210838 systemd-logind[1204]: New session 1 of user core. Aug 13 01:00:11.217415 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 01:00:11.218937 systemd[1]: Starting user@500.service... Aug 13 01:00:11.221758 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:00:11.289417 systemd[1283]: Queued start job for default target default.target. Aug 13 01:00:11.289997 systemd[1283]: Reached target paths.target. Aug 13 01:00:11.290020 systemd[1283]: Reached target sockets.target. Aug 13 01:00:11.290037 systemd[1283]: Reached target timers.target. Aug 13 01:00:11.290052 systemd[1283]: Reached target basic.target. Aug 13 01:00:11.290099 systemd[1283]: Reached target default.target. 
Aug 13 01:00:11.290132 systemd[1283]: Startup finished in 63ms. Aug 13 01:00:11.290207 systemd[1]: Started user@500.service. Aug 13 01:00:11.291333 systemd[1]: Started session-1.scope. Aug 13 01:00:11.344460 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:41390.service. Aug 13 01:00:11.376680 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 41390 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:00:11.377859 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:00:11.381214 systemd-logind[1204]: New session 2 of user core. Aug 13 01:00:11.382468 systemd[1]: Started session-2.scope. Aug 13 01:00:11.435830 sshd[1292]: pam_unix(sshd:session): session closed for user core Aug 13 01:00:11.438509 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:41390.service: Deactivated successfully. Aug 13 01:00:11.439081 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:00:11.439562 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:00:11.440641 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:41396.service. Aug 13 01:00:11.441469 systemd-logind[1204]: Removed session 2. Aug 13 01:00:11.469976 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 41396 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:00:11.471032 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:00:11.474054 systemd-logind[1204]: New session 3 of user core. Aug 13 01:00:11.474745 systemd[1]: Started session-3.scope. Aug 13 01:00:11.525424 sshd[1298]: pam_unix(sshd:session): session closed for user core Aug 13 01:00:11.528437 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:41396.service: Deactivated successfully. Aug 13 01:00:11.529063 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:00:11.529599 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. 
Aug 13 01:00:11.530764 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:41402.service.
Aug 13 01:00:11.531512 systemd-logind[1204]: Removed session 3.
Aug 13 01:00:11.559828 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 41402 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:00:11.561050 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:00:11.564889 systemd-logind[1204]: New session 4 of user core.
Aug 13 01:00:11.565651 systemd[1]: Started session-4.scope.
Aug 13 01:00:11.619951 sshd[1305]: pam_unix(sshd:session): session closed for user core
Aug 13 01:00:11.622698 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:41402.service: Deactivated successfully.
Aug 13 01:00:11.623250 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:00:11.623725 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:00:11.624912 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:41404.service.
Aug 13 01:00:11.625835 systemd-logind[1204]: Removed session 4.
Aug 13 01:00:11.653880 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:00:11.655343 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:00:11.659590 systemd-logind[1204]: New session 5 of user core.
Aug 13 01:00:11.660490 systemd[1]: Started session-5.scope.
Aug 13 01:00:11.717825 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:00:11.718027 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 01:00:11.739899 systemd[1]: Starting docker.service...
Aug 13 01:00:11.775875 env[1325]: time="2025-08-13T01:00:11.775790069Z" level=info msg="Starting up"
Aug 13 01:00:11.777176 env[1325]: time="2025-08-13T01:00:11.777141423Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 01:00:11.777176 env[1325]: time="2025-08-13T01:00:11.777158363Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 01:00:11.777176 env[1325]: time="2025-08-13T01:00:11.777174605Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 01:00:11.777176 env[1325]: time="2025-08-13T01:00:11.777185040Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 01:00:11.778743 env[1325]: time="2025-08-13T01:00:11.778718494Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 01:00:11.778743 env[1325]: time="2025-08-13T01:00:11.778735341Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 01:00:11.778838 env[1325]: time="2025-08-13T01:00:11.778747725Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 01:00:11.778838 env[1325]: time="2025-08-13T01:00:11.778757206Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 01:00:12.617196 env[1325]: time="2025-08-13T01:00:12.617129105Z" level=info msg="Loading containers: start."
Aug 13 01:00:12.758840 kernel: Initializing XFRM netlink socket
Aug 13 01:00:12.791804 env[1325]: time="2025-08-13T01:00:12.791730409Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 01:00:12.849723 systemd-networkd[1034]: docker0: Link UP
Aug 13 01:00:13.132420 env[1325]: time="2025-08-13T01:00:13.132349772Z" level=info msg="Loading containers: done."
Aug 13 01:00:13.180058 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3167161915-merged.mount: Deactivated successfully.
Aug 13 01:00:13.182583 env[1325]: time="2025-08-13T01:00:13.182533752Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:00:13.182754 env[1325]: time="2025-08-13T01:00:13.182718715Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 01:00:13.182859 env[1325]: time="2025-08-13T01:00:13.182842741Z" level=info msg="Daemon has completed initialization"
Aug 13 01:00:13.201637 systemd[1]: Started docker.service.
Aug 13 01:00:13.205924 env[1325]: time="2025-08-13T01:00:13.205859533Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:00:14.001926 env[1216]: time="2025-08-13T01:00:14.001867201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\""
Aug 13 01:00:14.752196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389426562.mount: Deactivated successfully.
Aug 13 01:00:17.398748 env[1216]: time="2025-08-13T01:00:17.398663823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:17.668677 env[1216]: time="2025-08-13T01:00:17.668423389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:17.719417 env[1216]: time="2025-08-13T01:00:17.719365470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:17.741461 env[1216]: time="2025-08-13T01:00:17.741423478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:17.742432 env[1216]: time="2025-08-13T01:00:17.742394199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\""
Aug 13 01:00:17.743466 env[1216]: time="2025-08-13T01:00:17.743401609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Aug 13 01:00:20.274318 env[1216]: time="2025-08-13T01:00:20.274243687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:20.276546 env[1216]: time="2025-08-13T01:00:20.276463279Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:20.278584 env[1216]: time="2025-08-13T01:00:20.278535852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:20.280531 env[1216]: time="2025-08-13T01:00:20.280483735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:20.281391 env[1216]: time="2025-08-13T01:00:20.281336498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Aug 13 01:00:20.282133 env[1216]: time="2025-08-13T01:00:20.282081455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 01:00:20.510010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:00:20.510306 systemd[1]: Stopped kubelet.service.
Aug 13 01:00:20.510363 systemd[1]: kubelet.service: Consumed 1.042s CPU time.
Aug 13 01:00:20.512242 systemd[1]: Starting kubelet.service...
Aug 13 01:00:20.624838 systemd[1]: Started kubelet.service.
Aug 13 01:00:20.989050 kubelet[1460]: E0813 01:00:20.988872 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:00:20.992687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:00:20.992848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:00:24.272359 env[1216]: time="2025-08-13T01:00:24.272259065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:24.279703 env[1216]: time="2025-08-13T01:00:24.279653313Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:24.282877 env[1216]: time="2025-08-13T01:00:24.282837228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:24.285852 env[1216]: time="2025-08-13T01:00:24.285792437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:24.286997 env[1216]: time="2025-08-13T01:00:24.286949231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Aug 13 01:00:24.287799 env[1216]: time="2025-08-13T01:00:24.287750024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 01:00:25.673101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083553084.mount: Deactivated successfully.
Aug 13 01:00:26.759945 env[1216]: time="2025-08-13T01:00:26.759880953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:26.819123 env[1216]: time="2025-08-13T01:00:26.819026226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:26.917505 env[1216]: time="2025-08-13T01:00:26.917438982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:26.934276 env[1216]: time="2025-08-13T01:00:26.934206666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:26.934865 env[1216]: time="2025-08-13T01:00:26.934826795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Aug 13 01:00:26.935444 env[1216]: time="2025-08-13T01:00:26.935385482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 13 01:00:29.735218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165352161.mount: Deactivated successfully.
Aug 13 01:00:31.112983 env[1216]: time="2025-08-13T01:00:31.112904082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:31.116353 env[1216]: time="2025-08-13T01:00:31.116310522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:31.118468 env[1216]: time="2025-08-13T01:00:31.118430622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:31.120946 env[1216]: time="2025-08-13T01:00:31.120898692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:31.121869 env[1216]: time="2025-08-13T01:00:31.121821347Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Aug 13 01:00:31.122483 env[1216]: time="2025-08-13T01:00:31.122458078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 01:00:31.243910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 01:00:31.244176 systemd[1]: Stopped kubelet.service.
Aug 13 01:00:31.246185 systemd[1]: Starting kubelet.service...
Aug 13 01:00:31.340449 systemd[1]: Started kubelet.service.
Aug 13 01:00:31.671447 kubelet[1471]: E0813 01:00:31.671386 1471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:00:31.673402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:00:31.673539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:00:32.312447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621471773.mount: Deactivated successfully.
Aug 13 01:00:32.322370 env[1216]: time="2025-08-13T01:00:32.322294672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:32.324283 env[1216]: time="2025-08-13T01:00:32.324232324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:32.326935 env[1216]: time="2025-08-13T01:00:32.326900900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:32.329751 env[1216]: time="2025-08-13T01:00:32.329714038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:32.331935 env[1216]: time="2025-08-13T01:00:32.330313775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:00:32.331935 env[1216]: time="2025-08-13T01:00:32.331562574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 13 01:00:34.516458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413863929.mount: Deactivated successfully.
Aug 13 01:00:38.877900 env[1216]: time="2025-08-13T01:00:38.877815939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:38.880221 env[1216]: time="2025-08-13T01:00:38.880146422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:38.882351 env[1216]: time="2025-08-13T01:00:38.882289802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:38.884285 env[1216]: time="2025-08-13T01:00:38.884247231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:00:38.885162 env[1216]: time="2025-08-13T01:00:38.885103126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Aug 13 01:00:41.214256 systemd[1]: Stopped kubelet.service.
Aug 13 01:00:41.216237 systemd[1]: Starting kubelet.service...
Aug 13 01:00:41.238875 systemd[1]: Reloading.
Aug 13 01:00:41.319661 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-08-13T01:00:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 01:00:41.320177 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-08-13T01:00:41Z" level=info msg="torcx already run"
Aug 13 01:00:42.692041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 01:00:42.692061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 01:00:42.710002 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:00:42.793417 systemd[1]: Started kubelet.service.
Aug 13 01:00:42.795044 systemd[1]: Stopping kubelet.service...
Aug 13 01:00:42.798529 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:00:42.798762 systemd[1]: Stopped kubelet.service.
Aug 13 01:00:42.800860 systemd[1]: Starting kubelet.service...
Aug 13 01:00:42.898510 systemd[1]: Started kubelet.service.
Aug 13 01:00:43.000421 kubelet[1576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:00:43.000421 kubelet[1576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:00:43.000421 kubelet[1576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:00:43.000421 kubelet[1576]: I0813 01:00:43.000352 1576 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:00:43.310669 kubelet[1576]: I0813 01:00:43.310563 1576 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 01:00:43.310669 kubelet[1576]: I0813 01:00:43.310593 1576 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:00:43.310846 kubelet[1576]: I0813 01:00:43.310832 1576 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 01:00:43.419094 kubelet[1576]: I0813 01:00:43.419024 1576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:00:43.433869 kubelet[1576]: E0813 01:00:43.433806 1576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Aug 13 01:00:43.487991 kubelet[1576]: E0813 01:00:43.487934 1576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 01:00:43.487991 kubelet[1576]: I0813 01:00:43.487972 1576 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 01:00:43.492511 kubelet[1576]: I0813 01:00:43.492477 1576 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:00:43.492809 kubelet[1576]: I0813 01:00:43.492760 1576 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:00:43.492987 kubelet[1576]: I0813 01:00:43.492811 1576 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:00:43.493095 kubelet[1576]: I0813 01:00:43.492990 1576 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:00:43.493095 kubelet[1576]: I0813 01:00:43.492998 1576 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 01:00:43.493148 kubelet[1576]: I0813 01:00:43.493128 1576 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:00:43.525916 kubelet[1576]: I0813 01:00:43.525864 1576 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 01:00:43.525916 kubelet[1576]: I0813 01:00:43.525919 1576 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:00:43.526124 kubelet[1576]: I0813 01:00:43.525963 1576 kubelet.go:386] "Adding apiserver pod source"
Aug 13 01:00:43.526124 kubelet[1576]: I0813 01:00:43.525986 1576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:00:43.585279 kubelet[1576]: E0813 01:00:43.585169 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 13 01:00:43.586475 kubelet[1576]: E0813 01:00:43.586440 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 13 01:00:43.603854 kubelet[1576]: I0813 01:00:43.603838 1576 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 01:00:43.604341 kubelet[1576]: I0813 01:00:43.604289 1576 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 01:00:43.604938 kubelet[1576]: W0813 01:00:43.604917 1576 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 01:00:43.607193 kubelet[1576]: I0813 01:00:43.607166 1576 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 01:00:43.607257 kubelet[1576]: I0813 01:00:43.607216 1576 server.go:1289] "Started kubelet"
Aug 13 01:00:43.623766 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Aug 13 01:00:43.624229 kubelet[1576]: I0813 01:00:43.624099 1576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:00:43.644261 kubelet[1576]: I0813 01:00:43.644191 1576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:00:43.645142 kubelet[1576]: I0813 01:00:43.645104 1576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:00:43.656247 kubelet[1576]: I0813 01:00:43.656189 1576 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:00:43.656400 kubelet[1576]: I0813 01:00:43.656288 1576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:00:43.657523 kubelet[1576]: I0813 01:00:43.657459 1576 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 01:00:43.659128 kubelet[1576]: I0813 01:00:43.659104 1576 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 01:00:43.663369 kubelet[1576]: E0813 01:00:43.663334 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:43.664043 kubelet[1576]: I0813 01:00:43.663886 1576 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 01:00:43.666397 kubelet[1576]: I0813 01:00:43.664032 1576 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:00:43.666864 kubelet[1576]: I0813 01:00:43.666837 1576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:00:43.669283 kubelet[1576]: E0813 01:00:43.669254 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 13 01:00:43.669945 kubelet[1576]: E0813 01:00:43.669648 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms"
Aug 13 01:00:43.670328 kubelet[1576]: E0813 01:00:43.668915 1576 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2dce3d607935 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 01:00:43.607185717 +0000 UTC m=+0.704883905,LastTimestamp:2025-08-13 01:00:43.607185717 +0000 UTC m=+0.704883905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 01:00:43.670814 kubelet[1576]: E0813 01:00:43.670749 1576 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:00:43.673285 kubelet[1576]: I0813 01:00:43.673266 1576 factory.go:223] Registration of the containerd container factory successfully
Aug 13 01:00:43.673285 kubelet[1576]: I0813 01:00:43.673280 1576 factory.go:223] Registration of the systemd container factory successfully
Aug 13 01:00:43.678020 kubelet[1576]: I0813 01:00:43.676102 1576 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:00:43.678020 kubelet[1576]: I0813 01:00:43.677007 1576 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:00:43.678020 kubelet[1576]: I0813 01:00:43.677021 1576 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 01:00:43.678020 kubelet[1576]: I0813 01:00:43.677121 1576 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 01:00:43.678020 kubelet[1576]: I0813 01:00:43.677132 1576 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 13 01:00:43.678020 kubelet[1576]: E0813 01:00:43.677189 1576 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:00:43.680509 kubelet[1576]: E0813 01:00:43.680461 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Aug 13 01:00:43.682347 kubelet[1576]: I0813 01:00:43.682331 1576 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 01:00:43.682347 kubelet[1576]: I0813 01:00:43.682342 1576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 01:00:43.682601 kubelet[1576]: I0813 01:00:43.682356 1576 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:00:43.764786 kubelet[1576]: E0813 01:00:43.764704 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:43.778107 kubelet[1576]: E0813 01:00:43.778056 1576 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 01:00:43.865679 kubelet[1576]: E0813 01:00:43.865490 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:43.871204 kubelet[1576]: E0813 01:00:43.871166 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms"
Aug 13 01:00:43.965986 kubelet[1576]: E0813 01:00:43.965913 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:43.978188 kubelet[1576]: E0813 01:00:43.978159 1576 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 01:00:44.066103 kubelet[1576]: E0813 01:00:44.066042 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:44.167162 kubelet[1576]: E0813 01:00:44.167008 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:44.267696 kubelet[1576]: E0813 01:00:44.267626 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:44.272324 kubelet[1576]: E0813 01:00:44.272282 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms"
Aug 13 01:00:44.367745 kubelet[1576]: E0813 01:00:44.367693 1576 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 01:00:44.379011 kubelet[1576]: E0813 01:00:44.378970 1576 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 01:00:44.381587 kubelet[1576]: I0813 01:00:44.381556 1576 policy_none.go:49] "None policy: Start"
Aug 13 01:00:44.381587 kubelet[1576]: I0813 01:00:44.381583 1576 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 01:00:44.381675 kubelet[1576]: I0813 01:00:44.381598 1576 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:00:44.391461 systemd[1]: Created slice kubepods.slice.
Aug 13 01:00:44.395389 systemd[1]: Created slice kubepods-burstable.slice.
Aug 13 01:00:44.397861 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 01:00:44.403387 kubelet[1576]: E0813 01:00:44.403354 1576 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:00:44.403675 kubelet[1576]: I0813 01:00:44.403515 1576 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:00:44.403675 kubelet[1576]: I0813 01:00:44.403571 1576 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:00:44.403801 kubelet[1576]: I0813 01:00:44.403784 1576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:00:44.404359 kubelet[1576]: E0813 01:00:44.404324 1576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:00:44.404409 kubelet[1576]: E0813 01:00:44.404371 1576 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 01:00:44.505364 kubelet[1576]: I0813 01:00:44.505226 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 01:00:44.505671 kubelet[1576]: E0813 01:00:44.505621 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 13 01:00:44.509283 kubelet[1576]: E0813 01:00:44.509257 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:00:44.707570 kubelet[1576]: I0813 01:00:44.707509 1576 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Aug 13 01:00:44.707868 kubelet[1576]: E0813 01:00:44.707840 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 13 01:00:45.015126 kubelet[1576]: E0813 01:00:45.015083 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:00:45.020659 kubelet[1576]: E0813 01:00:45.020618 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:00:45.073440 kubelet[1576]: E0813 01:00:45.073410 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s" Aug 13 01:00:45.108825 kubelet[1576]: I0813 01:00:45.108802 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 01:00:45.109092 kubelet[1576]: E0813 01:00:45.109063 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Aug 13 01:00:45.190418 kubelet[1576]: E0813 01:00:45.189238 1576 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:00:45.189812 systemd[1]: Created slice kubepods-burstable-pod34670bfc8d31350d2043b335c9905c02.slice. Aug 13 01:00:45.203518 kubelet[1576]: E0813 01:00:45.203458 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:45.206475 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Aug 13 01:00:45.208156 kubelet[1576]: E0813 01:00:45.208137 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:45.210217 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
Aug 13 01:00:45.211873 kubelet[1576]: E0813 01:00:45.211846 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:45.275522 kubelet[1576]: I0813 01:00:45.275325 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:45.275522 kubelet[1576]: I0813 01:00:45.275374 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:45.275522 kubelet[1576]: I0813 01:00:45.275434 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:45.275853 kubelet[1576]: I0813 01:00:45.275528 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:45.275853 kubelet[1576]: I0813 01:00:45.275596 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:45.275853 kubelet[1576]: I0813 01:00:45.275622 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:45.275853 kubelet[1576]: I0813 01:00:45.275652 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:45.275853 kubelet[1576]: I0813 01:00:45.275676 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:45.276033 kubelet[1576]: I0813 01:00:45.275699 1576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:45.504807 kubelet[1576]: E0813 01:00:45.504706 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:45.505658 env[1216]: time="2025-08-13T01:00:45.505601379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34670bfc8d31350d2043b335c9905c02,Namespace:kube-system,Attempt:0,}" Aug 13 01:00:45.508862 kubelet[1576]: E0813 01:00:45.508825 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:45.509355 env[1216]: time="2025-08-13T01:00:45.509318714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Aug 13 01:00:45.512498 kubelet[1576]: E0813 01:00:45.512480 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:45.512987 env[1216]: time="2025-08-13T01:00:45.512948063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Aug 13 01:00:45.524423 kubelet[1576]: E0813 01:00:45.524371 1576 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 01:00:45.911018 kubelet[1576]: I0813 01:00:45.910961 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 01:00:45.911384 kubelet[1576]: E0813 01:00:45.911359 1576 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: 
connection refused" node="localhost" Aug 13 01:00:46.546944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846302230.mount: Deactivated successfully. Aug 13 01:00:46.554357 env[1216]: time="2025-08-13T01:00:46.554276346Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.559537 env[1216]: time="2025-08-13T01:00:46.559464518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.561284 env[1216]: time="2025-08-13T01:00:46.561254906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.562733 env[1216]: time="2025-08-13T01:00:46.562651815Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.566545 env[1216]: time="2025-08-13T01:00:46.566490177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.567656 env[1216]: time="2025-08-13T01:00:46.567620780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.568973 env[1216]: time="2025-08-13T01:00:46.568940490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.570440 env[1216]: 
time="2025-08-13T01:00:46.570388567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.572511 env[1216]: time="2025-08-13T01:00:46.572480453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.574432 env[1216]: time="2025-08-13T01:00:46.574405421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.575034 env[1216]: time="2025-08-13T01:00:46.575008516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.575843 env[1216]: time="2025-08-13T01:00:46.575809737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:00:46.625644 env[1216]: time="2025-08-13T01:00:46.625422351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:00:46.625644 env[1216]: time="2025-08-13T01:00:46.625479073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:00:46.625644 env[1216]: time="2025-08-13T01:00:46.625493626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:00:46.625908 env[1216]: time="2025-08-13T01:00:46.625848497Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b21dad21d0d47b6706f3f3dc3e00313ddf458e18edf49babb6eced853a73cc0 pid=1622 runtime=io.containerd.runc.v2 Aug 13 01:00:46.628377 env[1216]: time="2025-08-13T01:00:46.628153785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:00:46.628377 env[1216]: time="2025-08-13T01:00:46.628201134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:00:46.628377 env[1216]: time="2025-08-13T01:00:46.628214666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:00:46.628596 env[1216]: time="2025-08-13T01:00:46.628553779Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d67caabfb5a6d7f3ca15745f7bcc4d2bbfe97f168f94248ca22714573abdaa4 pid=1636 runtime=io.containerd.runc.v2 Aug 13 01:00:46.635993 env[1216]: time="2025-08-13T01:00:46.635898827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:00:46.636229 env[1216]: time="2025-08-13T01:00:46.636204343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:00:46.636333 env[1216]: time="2025-08-13T01:00:46.636308384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:00:46.637604 env[1216]: time="2025-08-13T01:00:46.636600358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91042b02e6a6a81821058b53f1e96f1411e9e507ebd9e0e521e7f2d1a55a2d97 pid=1655 runtime=io.containerd.runc.v2 Aug 13 01:00:46.644166 systemd[1]: Started cri-containerd-8b21dad21d0d47b6706f3f3dc3e00313ddf458e18edf49babb6eced853a73cc0.scope. Aug 13 01:00:46.689506 systemd[1]: Started cri-containerd-0d67caabfb5a6d7f3ca15745f7bcc4d2bbfe97f168f94248ca22714573abdaa4.scope. Aug 13 01:00:46.690003 kubelet[1576]: E0813 01:00:46.689880 1576 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="3.2s" Aug 13 01:00:46.696176 systemd[1]: Started cri-containerd-91042b02e6a6a81821058b53f1e96f1411e9e507ebd9e0e521e7f2d1a55a2d97.scope. 
Aug 13 01:00:46.760415 env[1216]: time="2025-08-13T01:00:46.760343838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b21dad21d0d47b6706f3f3dc3e00313ddf458e18edf49babb6eced853a73cc0\"" Aug 13 01:00:46.762093 kubelet[1576]: E0813 01:00:46.761813 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:46.773503 env[1216]: time="2025-08-13T01:00:46.771804348Z" level=info msg="CreateContainer within sandbox \"8b21dad21d0d47b6706f3f3dc3e00313ddf458e18edf49babb6eced853a73cc0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:00:46.775988 env[1216]: time="2025-08-13T01:00:46.775935216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34670bfc8d31350d2043b335c9905c02,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d67caabfb5a6d7f3ca15745f7bcc4d2bbfe97f168f94248ca22714573abdaa4\"" Aug 13 01:00:46.776717 kubelet[1576]: E0813 01:00:46.776572 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:46.781340 env[1216]: time="2025-08-13T01:00:46.781285382Z" level=info msg="CreateContainer within sandbox \"0d67caabfb5a6d7f3ca15745f7bcc4d2bbfe97f168f94248ca22714573abdaa4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:00:46.790104 env[1216]: time="2025-08-13T01:00:46.790055783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"91042b02e6a6a81821058b53f1e96f1411e9e507ebd9e0e521e7f2d1a55a2d97\"" Aug 13 01:00:46.790896 kubelet[1576]: E0813 
01:00:46.790862 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:46.796153 env[1216]: time="2025-08-13T01:00:46.796113555Z" level=info msg="CreateContainer within sandbox \"91042b02e6a6a81821058b53f1e96f1411e9e507ebd9e0e521e7f2d1a55a2d97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:00:46.802404 env[1216]: time="2025-08-13T01:00:46.802300887Z" level=info msg="CreateContainer within sandbox \"8b21dad21d0d47b6706f3f3dc3e00313ddf458e18edf49babb6eced853a73cc0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f03d5051cb9ff9a61e382006d3cf3517590f534eb3b0d99ec57ebebef5dc603\"" Aug 13 01:00:46.803308 env[1216]: time="2025-08-13T01:00:46.803281153Z" level=info msg="StartContainer for \"1f03d5051cb9ff9a61e382006d3cf3517590f534eb3b0d99ec57ebebef5dc603\"" Aug 13 01:00:46.817155 env[1216]: time="2025-08-13T01:00:46.817089770Z" level=info msg="CreateContainer within sandbox \"0d67caabfb5a6d7f3ca15745f7bcc4d2bbfe97f168f94248ca22714573abdaa4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e88b512fcd5ac53931fd84b00d6305fe59049b790a0427d224b7359dcd2bee3e\"" Aug 13 01:00:46.817761 env[1216]: time="2025-08-13T01:00:46.817730120Z" level=info msg="StartContainer for \"e88b512fcd5ac53931fd84b00d6305fe59049b790a0427d224b7359dcd2bee3e\"" Aug 13 01:00:46.819467 systemd[1]: Started cri-containerd-1f03d5051cb9ff9a61e382006d3cf3517590f534eb3b0d99ec57ebebef5dc603.scope. 
Aug 13 01:00:46.823144 env[1216]: time="2025-08-13T01:00:46.823094939Z" level=info msg="CreateContainer within sandbox \"91042b02e6a6a81821058b53f1e96f1411e9e507ebd9e0e521e7f2d1a55a2d97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b19e97491c4e8e0f5014923fe8202b8374c239a5cded30b671e85fdffd6cbd94\"" Aug 13 01:00:46.823638 env[1216]: time="2025-08-13T01:00:46.823600568Z" level=info msg="StartContainer for \"b19e97491c4e8e0f5014923fe8202b8374c239a5cded30b671e85fdffd6cbd94\"" Aug 13 01:00:46.836811 systemd[1]: Started cri-containerd-e88b512fcd5ac53931fd84b00d6305fe59049b790a0427d224b7359dcd2bee3e.scope. Aug 13 01:00:46.847624 systemd[1]: Started cri-containerd-b19e97491c4e8e0f5014923fe8202b8374c239a5cded30b671e85fdffd6cbd94.scope. Aug 13 01:00:46.901031 env[1216]: time="2025-08-13T01:00:46.900955325Z" level=info msg="StartContainer for \"e88b512fcd5ac53931fd84b00d6305fe59049b790a0427d224b7359dcd2bee3e\" returns successfully" Aug 13 01:00:46.904369 env[1216]: time="2025-08-13T01:00:46.904315723Z" level=info msg="StartContainer for \"1f03d5051cb9ff9a61e382006d3cf3517590f534eb3b0d99ec57ebebef5dc603\" returns successfully" Aug 13 01:00:46.915246 env[1216]: time="2025-08-13T01:00:46.915189867Z" level=info msg="StartContainer for \"b19e97491c4e8e0f5014923fe8202b8374c239a5cded30b671e85fdffd6cbd94\" returns successfully" Aug 13 01:00:47.513000 kubelet[1576]: I0813 01:00:47.512962 1576 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 01:00:47.694902 kubelet[1576]: E0813 01:00:47.694642 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:47.694902 kubelet[1576]: E0813 01:00:47.694803 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:47.699143 kubelet[1576]: E0813 
01:00:47.698690 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:47.699143 kubelet[1576]: E0813 01:00:47.698820 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:47.699143 kubelet[1576]: E0813 01:00:47.698995 1576 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 01:00:47.699143 kubelet[1576]: E0813 01:00:47.699073 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:48.425861 kubelet[1576]: I0813 01:00:48.425811 1576 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 01:00:48.425861 kubelet[1576]: E0813 01:00:48.425855 1576 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 01:00:48.464972 kubelet[1576]: I0813 01:00:48.464929 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:48.476044 kubelet[1576]: E0813 01:00:48.476006 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:48.476258 kubelet[1576]: I0813 01:00:48.476241 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:48.477620 kubelet[1576]: E0813 01:00:48.477600 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 01:00:48.477714 kubelet[1576]: I0813 01:00:48.477700 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:48.479492 kubelet[1576]: E0813 01:00:48.479468 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:48.569268 kubelet[1576]: I0813 01:00:48.569213 1576 apiserver.go:52] "Watching apiserver" Aug 13 01:00:48.664593 kubelet[1576]: I0813 01:00:48.664528 1576 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:00:48.700474 kubelet[1576]: I0813 01:00:48.700344 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:48.700474 kubelet[1576]: I0813 01:00:48.700417 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:48.702188 kubelet[1576]: E0813 01:00:48.702167 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 01:00:48.702341 kubelet[1576]: E0813 01:00:48.702308 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:48.702625 kubelet[1576]: E0813 01:00:48.702600 1576 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:48.702708 kubelet[1576]: E0813 01:00:48.702694 1576 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:49.701496 kubelet[1576]: I0813 01:00:49.701456 1576 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 01:00:49.705339 kubelet[1576]: E0813 01:00:49.705301 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:50.703594 kubelet[1576]: E0813 01:00:50.703539 1576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:50.943210 systemd[1]: Reloading. Aug 13 01:00:51.025164 /usr/lib/systemd/system-generators/torcx-generator[1881]: time="2025-08-13T01:00:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:00:51.025195 /usr/lib/systemd/system-generators/torcx-generator[1881]: time="2025-08-13T01:00:51Z" level=info msg="torcx already run" Aug 13 01:00:51.087712 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:00:51.087730 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:00:51.105935 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:00:51.200426 systemd[1]: Stopping kubelet.service... 
Aug 13 01:00:51.200639 kubelet[1576]: I0813 01:00:51.200458 1576 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:00:51.220371 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:00:51.220573 systemd[1]: Stopped kubelet.service. Aug 13 01:00:51.222362 systemd[1]: Starting kubelet.service... Aug 13 01:00:51.343479 systemd[1]: Started kubelet.service. Aug 13 01:00:51.390688 kubelet[1928]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:00:51.390688 kubelet[1928]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:00:51.390688 kubelet[1928]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:00:51.391122 kubelet[1928]: I0813 01:00:51.390735 1928 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:00:51.396696 kubelet[1928]: I0813 01:00:51.396658 1928 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 01:00:51.396696 kubelet[1928]: I0813 01:00:51.396687 1928 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:00:51.397015 kubelet[1928]: I0813 01:00:51.396991 1928 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 01:00:51.398132 kubelet[1928]: I0813 01:00:51.398102 1928 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Aug 13 01:00:51.400403 kubelet[1928]: I0813 01:00:51.400354 1928 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:00:51.405419 kubelet[1928]: E0813 01:00:51.405380 1928 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 01:00:51.405419 kubelet[1928]: I0813 01:00:51.405417 1928 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 01:00:51.408894 kubelet[1928]: I0813 01:00:51.408873 1928 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:00:51.409054 kubelet[1928]: I0813 01:00:51.409025 1928 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:00:51.409182 kubelet[1928]: I0813 01:00:51.409046 1928 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:00:51.409277 kubelet[1928]: I0813 01:00:51.409190 1928 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:00:51.409277 kubelet[1928]: I0813 01:00:51.409198 1928 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 01:00:51.409277 kubelet[1928]: I0813 01:00:51.409235 1928 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:00:51.409378 kubelet[1928]: I0813 01:00:51.409367 1928 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 01:00:51.409406 kubelet[1928]: I0813 01:00:51.409383 1928 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:00:51.409406 kubelet[1928]: I0813 01:00:51.409403 1928 kubelet.go:386] "Adding apiserver pod source"
Aug 13 01:00:51.409450 kubelet[1928]: I0813 01:00:51.409415 1928 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:00:51.410827 kubelet[1928]: I0813 01:00:51.410788 1928 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 01:00:51.413313 kubelet[1928]: I0813 01:00:51.411389 1928 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 01:00:51.419920 kubelet[1928]: I0813 01:00:51.419880 1928 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 01:00:51.420000 kubelet[1928]: I0813 01:00:51.419960 1928 server.go:1289] "Started kubelet"
Aug 13 01:00:51.421064 kubelet[1928]: I0813 01:00:51.421045 1928 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:00:51.421531 kubelet[1928]: I0813 01:00:51.421483 1928 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:00:51.423528 kubelet[1928]: I0813 01:00:51.423501 1928 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 01:00:51.424704 kubelet[1928]: I0813 01:00:51.424638 1928 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:00:51.425086 kubelet[1928]: I0813 01:00:51.425048 1928 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:00:51.429931 kubelet[1928]: I0813 01:00:51.429890 1928 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:00:51.430345 kubelet[1928]: I0813 01:00:51.430307 1928 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 01:00:51.431219 kubelet[1928]: I0813 01:00:51.431185 1928 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 01:00:51.432624 kubelet[1928]: I0813 01:00:51.432594 1928 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:00:51.433757 kubelet[1928]: I0813 01:00:51.433713 1928 factory.go:223] Registration of the containerd container factory successfully
Aug 13 01:00:51.433757 kubelet[1928]: I0813 01:00:51.433742 1928 factory.go:223] Registration of the systemd container factory successfully
Aug 13 01:00:51.433892 kubelet[1928]: I0813 01:00:51.433841 1928 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:00:51.450877 kubelet[1928]: I0813 01:00:51.450831 1928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:00:51.451981 kubelet[1928]: I0813 01:00:51.451817 1928 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:00:51.451981 kubelet[1928]: I0813 01:00:51.451836 1928 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 01:00:51.451981 kubelet[1928]: I0813 01:00:51.451901 1928 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 01:00:51.451981 kubelet[1928]: I0813 01:00:51.451910 1928 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 13 01:00:51.451981 kubelet[1928]: E0813 01:00:51.451970 1928 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:00:51.471440 kubelet[1928]: I0813 01:00:51.471391 1928 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 01:00:51.471440 kubelet[1928]: I0813 01:00:51.471416 1928 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 01:00:51.471440 kubelet[1928]: I0813 01:00:51.471448 1928 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:00:51.471729 kubelet[1928]: I0813 01:00:51.471611 1928 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 01:00:51.471729 kubelet[1928]: I0813 01:00:51.471622 1928 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 01:00:51.471729 kubelet[1928]: I0813 01:00:51.471646 1928 policy_none.go:49] "None policy: Start"
Aug 13 01:00:51.471729 kubelet[1928]: I0813 01:00:51.471655 1928 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 01:00:51.471729 kubelet[1928]: I0813 01:00:51.471664 1928 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:00:51.471937 kubelet[1928]: I0813 01:00:51.471762 1928 state_mem.go:75] "Updated machine memory state"
Aug 13 01:00:51.476159 kubelet[1928]: E0813 01:00:51.476127 1928 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 13 01:00:51.476401 kubelet[1928]: I0813 01:00:51.476367 1928 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 01:00:51.476588 kubelet[1928]: I0813 01:00:51.476399 1928 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 01:00:51.476881 kubelet[1928]: I0813 01:00:51.476852 1928 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 01:00:51.477630 kubelet[1928]: E0813 01:00:51.477599 1928 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 01:00:51.552709 kubelet[1928]: I0813 01:00:51.552656 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 13 01:00:51.552943 kubelet[1928]: I0813 01:00:51.552888 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.552943 kubelet[1928]: I0813 01:00:51.552904 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:51.560501 kubelet[1928]: E0813 01:00:51.560448 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 13 01:00:51.584229 kubelet[1928]: I0813 01:00:51.584179 1928 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 01:00:51.592022 kubelet[1928]: I0813 01:00:51.591973 1928 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 13 01:00:51.592218 kubelet[1928]: I0813 01:00:51.592089 1928 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 13 01:00:51.633692 kubelet[1928]: I0813 01:00:51.633539 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.633692 kubelet[1928]: I0813 01:00:51.633580 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.633692 kubelet[1928]: I0813 01:00:51.633621 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.633692 kubelet[1928]: I0813 01:00:51.633642 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 01:00:51.634062 kubelet[1928]: I0813 01:00:51.633701 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:51.634062 kubelet[1928]: I0813 01:00:51.633807 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.634062 kubelet[1928]: I0813 01:00:51.633838 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 01:00:51.634062 kubelet[1928]: I0813 01:00:51.633862 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:51.634062 kubelet[1928]: I0813 01:00:51.633891 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34670bfc8d31350d2043b335c9905c02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34670bfc8d31350d2043b335c9905c02\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:51.861504 kubelet[1928]: E0813 01:00:51.861385 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:51.861722 kubelet[1928]: E0813 01:00:51.861547 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:51.861722 kubelet[1928]: E0813 01:00:51.861553 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:52.410844 kubelet[1928]: I0813 01:00:52.410761 1928 apiserver.go:52] "Watching apiserver"
Aug 13 01:00:52.431389 kubelet[1928]: I0813 01:00:52.431338 1928 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 01:00:52.461204 kubelet[1928]: I0813 01:00:52.461158 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 13 01:00:52.461399 kubelet[1928]: I0813 01:00:52.461309 1928 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:52.461399 kubelet[1928]: E0813 01:00:52.461342 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:52.509079 kubelet[1928]: E0813 01:00:52.508993 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 13 01:00:52.509283 kubelet[1928]: E0813 01:00:52.509243 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:52.510441 kubelet[1928]: E0813 01:00:52.510397 1928 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 13 01:00:52.510548 kubelet[1928]: E0813 01:00:52.510515 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:52.512056 sudo[1966]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 01:00:52.512341 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Aug 13 01:00:52.533621 kubelet[1928]: I0813 01:00:52.533534 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5335114330000001 podStartE2EDuration="1.533511433s" podCreationTimestamp="2025-08-13 01:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:00:52.50964537 +0000 UTC m=+1.157794876" watchObservedRunningTime="2025-08-13 01:00:52.533511433 +0000 UTC m=+1.181660929"
Aug 13 01:00:52.544185 kubelet[1928]: I0813 01:00:52.544086 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.544069196 podStartE2EDuration="3.544069196s" podCreationTimestamp="2025-08-13 01:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:00:52.543126108 +0000 UTC m=+1.191275614" watchObservedRunningTime="2025-08-13 01:00:52.544069196 +0000 UTC m=+1.192218702"
Aug 13 01:00:52.544383 kubelet[1928]: I0813 01:00:52.544302 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5442975859999999 podStartE2EDuration="1.544297586s" podCreationTimestamp="2025-08-13 01:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:00:52.534835566 +0000 UTC m=+1.182985072" watchObservedRunningTime="2025-08-13 01:00:52.544297586 +0000 UTC m=+1.192447092"
Aug 13 01:00:53.075795 sudo[1966]: pam_unix(sudo:session): session closed for user root
Aug 13 01:00:53.462575 kubelet[1928]: E0813 01:00:53.462442 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:53.462575 kubelet[1928]: E0813 01:00:53.462457 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:54.057986 update_engine[1206]: I0813 01:00:54.057899 1206 update_attempter.cc:509] Updating boot flags...
Aug 13 01:00:54.465232 kubelet[1928]: E0813 01:00:54.464204 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:54.815240 sudo[1314]: pam_unix(sudo:session): session closed for user root
Aug 13 01:00:54.829848 sshd[1311]: pam_unix(sshd:session): session closed for user core
Aug 13 01:00:54.831910 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:41404.service: Deactivated successfully.
Aug 13 01:00:54.832658 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:00:54.832825 systemd[1]: session-5.scope: Consumed 4.723s CPU time.
Aug 13 01:00:54.833205 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:00:54.833985 systemd-logind[1204]: Removed session 5.
Aug 13 01:00:56.627047 kubelet[1928]: E0813 01:00:56.626984 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:57.116948 kubelet[1928]: I0813 01:00:57.116904 1928 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 01:00:57.117512 env[1216]: time="2025-08-13T01:00:57.117468676Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 01:00:57.117896 kubelet[1928]: I0813 01:00:57.117638 1928 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 01:00:57.468057 kubelet[1928]: E0813 01:00:57.467906 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.046797 systemd[1]: Created slice kubepods-besteffort-pode0d50112_37a9_486c_9d04_06a808bc3837.slice.
Aug 13 01:00:58.060895 systemd[1]: Created slice kubepods-burstable-pod8fb65264_0805_48a0_8cae_9da614d07b43.slice.
Aug 13 01:00:58.079061 kubelet[1928]: I0813 01:00:58.078996 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0d50112-37a9-486c-9d04-06a808bc3837-kube-proxy\") pod \"kube-proxy-xvrgp\" (UID: \"e0d50112-37a9-486c-9d04-06a808bc3837\") " pod="kube-system/kube-proxy-xvrgp"
Aug 13 01:00:58.079061 kubelet[1928]: I0813 01:00:58.079052 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-bpf-maps\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079061 kubelet[1928]: I0813 01:00:58.079072 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fb65264-0805-48a0-8cae-9da614d07b43-clustermesh-secrets\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079061 kubelet[1928]: I0813 01:00:58.079088 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-run\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079101 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-hostproc\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079116 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cni-path\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079128 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-config-path\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079141 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-net\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079156 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-hubble-tls\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079590 kubelet[1928]: I0813 01:00:58.079169 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99pl\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-kube-api-access-x99pl\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079183 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0d50112-37a9-486c-9d04-06a808bc3837-xtables-lock\") pod \"kube-proxy-xvrgp\" (UID: \"e0d50112-37a9-486c-9d04-06a808bc3837\") " pod="kube-system/kube-proxy-xvrgp"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079213 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-cgroup\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079225 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-etc-cni-netd\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079239 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-xtables-lock\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079254 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-kernel\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.079862 kubelet[1928]: I0813 01:00:58.079270 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0d50112-37a9-486c-9d04-06a808bc3837-lib-modules\") pod \"kube-proxy-xvrgp\" (UID: \"e0d50112-37a9-486c-9d04-06a808bc3837\") " pod="kube-system/kube-proxy-xvrgp"
Aug 13 01:00:58.080013 kubelet[1928]: I0813 01:00:58.079285 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlls8\" (UniqueName: \"kubernetes.io/projected/e0d50112-37a9-486c-9d04-06a808bc3837-kube-api-access-mlls8\") pod \"kube-proxy-xvrgp\" (UID: \"e0d50112-37a9-486c-9d04-06a808bc3837\") " pod="kube-system/kube-proxy-xvrgp"
Aug 13 01:00:58.080013 kubelet[1928]: I0813 01:00:58.079302 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-lib-modules\") pod \"cilium-gz74w\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " pod="kube-system/cilium-gz74w"
Aug 13 01:00:58.181029 kubelet[1928]: I0813 01:00:58.180975 1928 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 01:00:58.318330 systemd[1]: Created slice kubepods-besteffort-podaf8768f7_f493_4b32_96af_bc4f16fe8d10.slice.
Aug 13 01:00:58.357735 kubelet[1928]: E0813 01:00:58.357628 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.359023 env[1216]: time="2025-08-13T01:00:58.358972888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvrgp,Uid:e0d50112-37a9-486c-9d04-06a808bc3837,Namespace:kube-system,Attempt:0,}"
Aug 13 01:00:58.363540 kubelet[1928]: E0813 01:00:58.363272 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.364087 env[1216]: time="2025-08-13T01:00:58.363979316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz74w,Uid:8fb65264-0805-48a0-8cae-9da614d07b43,Namespace:kube-system,Attempt:0,}"
Aug 13 01:00:58.379455 env[1216]: time="2025-08-13T01:00:58.379342315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:00:58.379455 env[1216]: time="2025-08-13T01:00:58.379412222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:00:58.379978 env[1216]: time="2025-08-13T01:00:58.379427364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:00:58.379978 env[1216]: time="2025-08-13T01:00:58.379723309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8e3fb49da23e3951803fcd03b0bfa17ba44c922b912b71ff2033b3917ed6292 pid=2043 runtime=io.containerd.runc.v2
Aug 13 01:00:58.381700 kubelet[1928]: I0813 01:00:58.381632 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hjd\" (UniqueName: \"kubernetes.io/projected/af8768f7-f493-4b32-96af-bc4f16fe8d10-kube-api-access-v4hjd\") pod \"cilium-operator-6c4d7847fc-dlmh8\" (UID: \"af8768f7-f493-4b32-96af-bc4f16fe8d10\") " pod="kube-system/cilium-operator-6c4d7847fc-dlmh8"
Aug 13 01:00:58.381700 kubelet[1928]: I0813 01:00:58.381694 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af8768f7-f493-4b32-96af-bc4f16fe8d10-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dlmh8\" (UID: \"af8768f7-f493-4b32-96af-bc4f16fe8d10\") " pod="kube-system/cilium-operator-6c4d7847fc-dlmh8"
Aug 13 01:00:58.406858 systemd[1]: Started cri-containerd-f8e3fb49da23e3951803fcd03b0bfa17ba44c922b912b71ff2033b3917ed6292.scope.
Aug 13 01:00:58.409125 env[1216]: time="2025-08-13T01:00:58.409041665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:00:58.409125 env[1216]: time="2025-08-13T01:00:58.409082471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:00:58.409125 env[1216]: time="2025-08-13T01:00:58.409094556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:00:58.409964 env[1216]: time="2025-08-13T01:00:58.409874874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3 pid=2071 runtime=io.containerd.runc.v2
Aug 13 01:00:58.423874 systemd[1]: Started cri-containerd-0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3.scope.
Aug 13 01:00:58.443873 env[1216]: time="2025-08-13T01:00:58.443815286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvrgp,Uid:e0d50112-37a9-486c-9d04-06a808bc3837,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8e3fb49da23e3951803fcd03b0bfa17ba44c922b912b71ff2033b3917ed6292\""
Aug 13 01:00:58.445490 kubelet[1928]: E0813 01:00:58.445212 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.452021 env[1216]: time="2025-08-13T01:00:58.451951941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz74w,Uid:8fb65264-0805-48a0-8cae-9da614d07b43,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\""
Aug 13 01:00:58.453243 env[1216]: time="2025-08-13T01:00:58.453187679Z" level=info msg="CreateContainer within sandbox \"f8e3fb49da23e3951803fcd03b0bfa17ba44c922b912b71ff2033b3917ed6292\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 01:00:58.453844 kubelet[1928]: E0813 01:00:58.453820 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.455752 env[1216]: time="2025-08-13T01:00:58.455692597Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 01:00:58.476190 env[1216]: time="2025-08-13T01:00:58.476083449Z" level=info msg="CreateContainer within sandbox \"f8e3fb49da23e3951803fcd03b0bfa17ba44c922b912b71ff2033b3917ed6292\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04cda81052ecd5c24e58805870ecb10a56258d44f9a60b263c68b410b3859d07\""
Aug 13 01:00:58.477042 env[1216]: time="2025-08-13T01:00:58.476965681Z" level=info msg="StartContainer for \"04cda81052ecd5c24e58805870ecb10a56258d44f9a60b263c68b410b3859d07\""
Aug 13 01:00:58.504066 systemd[1]: Started cri-containerd-04cda81052ecd5c24e58805870ecb10a56258d44f9a60b263c68b410b3859d07.scope.
Aug 13 01:00:58.539450 env[1216]: time="2025-08-13T01:00:58.539362129Z" level=info msg="StartContainer for \"04cda81052ecd5c24e58805870ecb10a56258d44f9a60b263c68b410b3859d07\" returns successfully"
Aug 13 01:00:58.623320 kubelet[1928]: E0813 01:00:58.623262 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:00:58.623936 env[1216]: time="2025-08-13T01:00:58.623873387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dlmh8,Uid:af8768f7-f493-4b32-96af-bc4f16fe8d10,Namespace:kube-system,Attempt:0,}"
Aug 13 01:00:58.644474 env[1216]: time="2025-08-13T01:00:58.643180011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:00:58.644474 env[1216]: time="2025-08-13T01:00:58.643232141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:00:58.644474 env[1216]: time="2025-08-13T01:00:58.643244907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:00:58.645683 env[1216]: time="2025-08-13T01:00:58.644832358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8 pid=2174 runtime=io.containerd.runc.v2 Aug 13 01:00:58.658672 systemd[1]: Started cri-containerd-44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8.scope. Aug 13 01:00:58.699483 env[1216]: time="2025-08-13T01:00:58.698733414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dlmh8,Uid:af8768f7-f493-4b32-96af-bc4f16fe8d10,Namespace:kube-system,Attempt:0,} returns sandbox id \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\"" Aug 13 01:00:58.699677 kubelet[1928]: E0813 01:00:58.699597 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:00:59.475929 kubelet[1928]: E0813 01:00:59.475864 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:00.479547 kubelet[1928]: E0813 01:01:00.479495 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:00.985157 kubelet[1928]: E0813 01:01:00.985103 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:01.066105 kubelet[1928]: I0813 01:01:01.065955 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvrgp" podStartSLOduration=3.065921256 podStartE2EDuration="3.065921256s" 
podCreationTimestamp="2025-08-13 01:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:00:59.487643166 +0000 UTC m=+8.135792692" watchObservedRunningTime="2025-08-13 01:01:01.065921256 +0000 UTC m=+9.714070762" Aug 13 01:01:01.481294 kubelet[1928]: E0813 01:01:01.481049 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:03.291889 kubelet[1928]: E0813 01:01:03.291842 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:03.485136 kubelet[1928]: E0813 01:01:03.485097 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:07.256578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263909870.mount: Deactivated successfully. 
Aug 13 01:01:13.904698 env[1216]: time="2025-08-13T01:01:13.904632301Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:13.988575 env[1216]: time="2025-08-13T01:01:13.988501260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:14.021150 env[1216]: time="2025-08-13T01:01:14.021093033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:14.021968 env[1216]: time="2025-08-13T01:01:14.021894657Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:01:14.023070 env[1216]: time="2025-08-13T01:01:14.023022593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:01:14.196268 env[1216]: time="2025-08-13T01:01:14.196110199Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:01:14.545909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304948469.mount: Deactivated successfully. 
Aug 13 01:01:17.419421 env[1216]: time="2025-08-13T01:01:17.419357866Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\"" Aug 13 01:01:17.419797 env[1216]: time="2025-08-13T01:01:17.419757101Z" level=info msg="StartContainer for \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\"" Aug 13 01:01:17.436682 systemd[1]: Started cri-containerd-ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888.scope. Aug 13 01:01:17.471820 systemd[1]: cri-containerd-ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888.scope: Deactivated successfully. Aug 13 01:01:18.275042 env[1216]: time="2025-08-13T01:01:18.274972275Z" level=info msg="StartContainer for \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\" returns successfully" Aug 13 01:01:18.288672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888-rootfs.mount: Deactivated successfully. 
Aug 13 01:01:18.840283 env[1216]: time="2025-08-13T01:01:18.840191911Z" level=info msg="shim disconnected" id=ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888 Aug 13 01:01:18.840283 env[1216]: time="2025-08-13T01:01:18.840246349Z" level=warning msg="cleaning up after shim disconnected" id=ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888 namespace=k8s.io Aug 13 01:01:18.840283 env[1216]: time="2025-08-13T01:01:18.840254896Z" level=info msg="cleaning up dead shim" Aug 13 01:01:18.846986 env[1216]: time="2025-08-13T01:01:18.846936308Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2385 runtime=io.containerd.runc.v2\n" Aug 13 01:01:19.280104 kubelet[1928]: E0813 01:01:19.279956 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:19.296454 env[1216]: time="2025-08-13T01:01:19.294023092Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:01:19.310976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534085293.mount: Deactivated successfully. 
Aug 13 01:01:19.312198 env[1216]: time="2025-08-13T01:01:19.312163022Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\"" Aug 13 01:01:19.312754 env[1216]: time="2025-08-13T01:01:19.312712493Z" level=info msg="StartContainer for \"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\"" Aug 13 01:01:19.331036 systemd[1]: Started cri-containerd-639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44.scope. Aug 13 01:01:19.352898 env[1216]: time="2025-08-13T01:01:19.352842169Z" level=info msg="StartContainer for \"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\" returns successfully" Aug 13 01:01:19.363120 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:01:19.363484 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:01:19.364160 systemd[1]: Stopping systemd-sysctl.service... Aug 13 01:01:19.365861 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:01:19.369073 systemd[1]: cri-containerd-639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44.scope: Deactivated successfully. Aug 13 01:01:19.375591 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 01:01:19.391792 env[1216]: time="2025-08-13T01:01:19.391718832Z" level=info msg="shim disconnected" id=639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44 Aug 13 01:01:19.391968 env[1216]: time="2025-08-13T01:01:19.391793610Z" level=warning msg="cleaning up after shim disconnected" id=639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44 namespace=k8s.io Aug 13 01:01:19.391968 env[1216]: time="2025-08-13T01:01:19.391809442Z" level=info msg="cleaning up dead shim" Aug 13 01:01:19.398394 env[1216]: time="2025-08-13T01:01:19.398343041Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2447 runtime=io.containerd.runc.v2\n" Aug 13 01:01:20.283549 kubelet[1928]: E0813 01:01:20.283508 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:20.307524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44-rootfs.mount: Deactivated successfully. Aug 13 01:01:20.402660 env[1216]: time="2025-08-13T01:01:20.402594810Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:01:20.566887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941183180.mount: Deactivated successfully. 
Aug 13 01:01:20.722133 env[1216]: time="2025-08-13T01:01:20.722047753Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\"" Aug 13 01:01:20.722706 env[1216]: time="2025-08-13T01:01:20.722660287Z" level=info msg="StartContainer for \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\"" Aug 13 01:01:20.743920 systemd[1]: Started cri-containerd-1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48.scope. Aug 13 01:01:20.772397 systemd[1]: cri-containerd-1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48.scope: Deactivated successfully. Aug 13 01:01:20.797680 env[1216]: time="2025-08-13T01:01:20.797623730Z" level=info msg="StartContainer for \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\" returns successfully" Aug 13 01:01:21.307493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48-rootfs.mount: Deactivated successfully. 
Aug 13 01:01:21.392895 kubelet[1928]: E0813 01:01:21.392825 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:21.511497 env[1216]: time="2025-08-13T01:01:21.511384018Z" level=info msg="shim disconnected" id=1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48 Aug 13 01:01:21.511497 env[1216]: time="2025-08-13T01:01:21.511462203Z" level=warning msg="cleaning up after shim disconnected" id=1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48 namespace=k8s.io Aug 13 01:01:21.511497 env[1216]: time="2025-08-13T01:01:21.511474998Z" level=info msg="cleaning up dead shim" Aug 13 01:01:21.520634 env[1216]: time="2025-08-13T01:01:21.520584057Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2502 runtime=io.containerd.runc.v2\n" Aug 13 01:01:22.150781 env[1216]: time="2025-08-13T01:01:22.150709818Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:22.243405 env[1216]: time="2025-08-13T01:01:22.243329920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:22.291253 kubelet[1928]: E0813 01:01:22.291202 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:22.312828 env[1216]: time="2025-08-13T01:01:22.312750155Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:22.313441 env[1216]: time="2025-08-13T01:01:22.313412302Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:01:22.439175 env[1216]: time="2025-08-13T01:01:22.438970310Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:01:22.476706 env[1216]: time="2025-08-13T01:01:22.476636477Z" level=info msg="CreateContainer within sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:01:23.223733 env[1216]: time="2025-08-13T01:01:23.223654205Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\"" Aug 13 01:01:23.224429 env[1216]: time="2025-08-13T01:01:23.224370659Z" level=info msg="StartContainer for \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\"" Aug 13 01:01:23.244382 systemd[1]: Started cri-containerd-66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25.scope. 
Aug 13 01:01:23.248513 env[1216]: time="2025-08-13T01:01:23.248446040Z" level=info msg="CreateContainer within sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\"" Aug 13 01:01:23.249076 env[1216]: time="2025-08-13T01:01:23.249029482Z" level=info msg="StartContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\"" Aug 13 01:01:23.275696 systemd[1]: Started cri-containerd-2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba.scope. Aug 13 01:01:23.282642 systemd[1]: cri-containerd-66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25.scope: Deactivated successfully. Aug 13 01:01:23.303370 env[1216]: time="2025-08-13T01:01:23.303315983Z" level=info msg="StartContainer for \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\" returns successfully" Aug 13 01:01:23.328163 env[1216]: time="2025-08-13T01:01:23.328076887Z" level=info msg="StartContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" returns successfully" Aug 13 01:01:23.481974 env[1216]: time="2025-08-13T01:01:23.481731635Z" level=info msg="shim disconnected" id=66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25 Aug 13 01:01:23.481974 env[1216]: time="2025-08-13T01:01:23.481823827Z" level=warning msg="cleaning up after shim disconnected" id=66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25 namespace=k8s.io Aug 13 01:01:23.481974 env[1216]: time="2025-08-13T01:01:23.481843827Z" level=info msg="cleaning up dead shim" Aug 13 01:01:23.493028 env[1216]: time="2025-08-13T01:01:23.492952979Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2593 runtime=io.containerd.runc.v2\n" Aug 13 01:01:23.807187 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25-rootfs.mount: Deactivated successfully. Aug 13 01:01:24.330956 kubelet[1928]: E0813 01:01:24.330856 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:24.331718 kubelet[1928]: E0813 01:01:24.330899 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:24.361387 env[1216]: time="2025-08-13T01:01:24.361307585Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:01:24.516054 kubelet[1928]: I0813 01:01:24.515965 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dlmh8" podStartSLOduration=2.902162595 podStartE2EDuration="26.515917157s" podCreationTimestamp="2025-08-13 01:00:58 +0000 UTC" firstStartedPulling="2025-08-13 01:00:58.700516267 +0000 UTC m=+7.348665783" lastFinishedPulling="2025-08-13 01:01:22.314270839 +0000 UTC m=+30.962420345" observedRunningTime="2025-08-13 01:01:24.514357563 +0000 UTC m=+33.162507069" watchObservedRunningTime="2025-08-13 01:01:24.515917157 +0000 UTC m=+33.164066663" Aug 13 01:01:24.559340 env[1216]: time="2025-08-13T01:01:24.559252684Z" level=info msg="CreateContainer within sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\"" Aug 13 01:01:24.560609 env[1216]: time="2025-08-13T01:01:24.560063744Z" level=info msg="StartContainer for 
\"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\"" Aug 13 01:01:24.602969 systemd[1]: Started cri-containerd-7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437.scope. Aug 13 01:01:24.720565 env[1216]: time="2025-08-13T01:01:24.720473730Z" level=info msg="StartContainer for \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" returns successfully" Aug 13 01:01:24.921354 kubelet[1928]: I0813 01:01:24.921152 1928 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:01:25.378128 kubelet[1928]: E0813 01:01:25.378020 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:25.378641 kubelet[1928]: E0813 01:01:25.378618 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:25.383760 systemd[1]: Created slice kubepods-burstable-pod5d6563ab_722b_4d47_acc0_87f09619ac08.slice. Aug 13 01:01:25.452725 systemd[1]: Created slice kubepods-burstable-podd0e4739d_9b76_4829_8d87_ad2e1eed1ba3.slice. 
Aug 13 01:01:25.521765 kubelet[1928]: I0813 01:01:25.521668 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gz74w" podStartSLOduration=11.953702815 podStartE2EDuration="27.521651972s" podCreationTimestamp="2025-08-13 01:00:58 +0000 UTC" firstStartedPulling="2025-08-13 01:00:58.454938165 +0000 UTC m=+7.103087681" lastFinishedPulling="2025-08-13 01:01:14.022887332 +0000 UTC m=+22.671036838" observedRunningTime="2025-08-13 01:01:25.521561574 +0000 UTC m=+34.169711090" watchObservedRunningTime="2025-08-13 01:01:25.521651972 +0000 UTC m=+34.169801489" Aug 13 01:01:25.524038 kubelet[1928]: I0813 01:01:25.523995 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d6563ab-722b-4d47-acc0-87f09619ac08-config-volume\") pod \"coredns-674b8bbfcf-pnfrl\" (UID: \"5d6563ab-722b-4d47-acc0-87f09619ac08\") " pod="kube-system/coredns-674b8bbfcf-pnfrl" Aug 13 01:01:25.524121 kubelet[1928]: I0813 01:01:25.524066 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e4739d-9b76-4829-8d87-ad2e1eed1ba3-config-volume\") pod \"coredns-674b8bbfcf-4vxlf\" (UID: \"d0e4739d-9b76-4829-8d87-ad2e1eed1ba3\") " pod="kube-system/coredns-674b8bbfcf-4vxlf" Aug 13 01:01:25.524174 kubelet[1928]: I0813 01:01:25.524153 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v9w\" (UniqueName: \"kubernetes.io/projected/d0e4739d-9b76-4829-8d87-ad2e1eed1ba3-kube-api-access-s2v9w\") pod \"coredns-674b8bbfcf-4vxlf\" (UID: \"d0e4739d-9b76-4829-8d87-ad2e1eed1ba3\") " pod="kube-system/coredns-674b8bbfcf-4vxlf" Aug 13 01:01:25.524206 kubelet[1928]: I0813 01:01:25.524179 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgjn\" 
(UniqueName: \"kubernetes.io/projected/5d6563ab-722b-4d47-acc0-87f09619ac08-kube-api-access-stgjn\") pod \"coredns-674b8bbfcf-pnfrl\" (UID: \"5d6563ab-722b-4d47-acc0-87f09619ac08\") " pod="kube-system/coredns-674b8bbfcf-pnfrl" Aug 13 01:01:25.686947 kubelet[1928]: E0813 01:01:25.686821 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:25.687537 env[1216]: time="2025-08-13T01:01:25.687492642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pnfrl,Uid:5d6563ab-722b-4d47-acc0-87f09619ac08,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:25.755661 kubelet[1928]: E0813 01:01:25.755610 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:25.756185 env[1216]: time="2025-08-13T01:01:25.756143697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4vxlf,Uid:d0e4739d-9b76-4829-8d87-ad2e1eed1ba3,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:26.334459 kubelet[1928]: E0813 01:01:26.334411 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:27.336433 kubelet[1928]: E0813 01:01:27.336380 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:27.805360 systemd-networkd[1034]: cilium_host: Link UP Aug 13 01:01:27.806930 systemd-networkd[1034]: cilium_net: Link UP Aug 13 01:01:27.807719 systemd-networkd[1034]: cilium_net: Gained carrier Aug 13 01:01:27.808726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 01:01:27.808907 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 01:01:27.809345 systemd-networkd[1034]: cilium_host: Gained carrier Aug 13 01:01:27.898311 systemd-networkd[1034]: cilium_vxlan: Link UP Aug 13 01:01:27.898323 systemd-networkd[1034]: cilium_vxlan: Gained carrier Aug 13 01:01:28.118811 kernel: NET: Registered PF_ALG protocol family Aug 13 01:01:28.203245 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:53728.service. Aug 13 01:01:28.239290 sshd[2889]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:01:28.240387 sshd[2889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:01:28.244130 systemd-logind[1204]: New session 6 of user core. Aug 13 01:01:28.244932 systemd[1]: Started session-6.scope. Aug 13 01:01:28.353989 systemd-networkd[1034]: cilium_net: Gained IPv6LL Aug 13 01:01:28.402523 sshd[2889]: pam_unix(sshd:session): session closed for user core Aug 13 01:01:28.405529 systemd-logind[1204]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:01:28.405861 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:53728.service: Deactivated successfully. Aug 13 01:01:28.406532 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:01:28.407313 systemd-logind[1204]: Removed session 6. 
Aug 13 01:01:28.417947 systemd-networkd[1034]: cilium_host: Gained IPv6LL Aug 13 01:01:28.693088 systemd-networkd[1034]: lxc_health: Link UP Aug 13 01:01:28.709298 systemd-networkd[1034]: lxc_health: Gained carrier Aug 13 01:01:28.709806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:01:28.842362 systemd-networkd[1034]: lxc497b077bfb92: Link UP Aug 13 01:01:28.852804 kernel: eth0: renamed from tmpeb187 Aug 13 01:01:28.859063 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:01:28.859113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc497b077bfb92: link becomes ready Aug 13 01:01:28.859274 systemd-networkd[1034]: lxc497b077bfb92: Gained carrier Aug 13 01:01:28.975641 systemd-networkd[1034]: lxc4ec5324994f7: Link UP Aug 13 01:01:28.984826 kernel: eth0: renamed from tmp582e8 Aug 13 01:01:28.991499 systemd-networkd[1034]: lxc4ec5324994f7: Gained carrier Aug 13 01:01:28.991806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4ec5324994f7: link becomes ready Aug 13 01:01:29.953099 systemd-networkd[1034]: cilium_vxlan: Gained IPv6LL Aug 13 01:01:30.080950 systemd-networkd[1034]: lxc_health: Gained IPv6LL Aug 13 01:01:30.365618 kubelet[1928]: E0813 01:01:30.365567 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:30.784930 systemd-networkd[1034]: lxc497b077bfb92: Gained IPv6LL Aug 13 01:01:30.849016 systemd-networkd[1034]: lxc4ec5324994f7: Gained IPv6LL Aug 13 01:01:31.342875 kubelet[1928]: E0813 01:01:31.342836 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:32.344368 kubelet[1928]: E0813 01:01:32.344306 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:01:32.425760 env[1216]: time="2025-08-13T01:01:32.425481092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:32.425760 env[1216]: time="2025-08-13T01:01:32.425537301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:32.425760 env[1216]: time="2025-08-13T01:01:32.425549285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:32.426335 env[1216]: time="2025-08-13T01:01:32.426013404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05 pid=3190 runtime=io.containerd.runc.v2 Aug 13 01:01:32.431129 env[1216]: time="2025-08-13T01:01:32.427951930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:32.431129 env[1216]: time="2025-08-13T01:01:32.427990636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:32.431129 env[1216]: time="2025-08-13T01:01:32.428000856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:32.431815 env[1216]: time="2025-08-13T01:01:32.431703688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb187e7bd85d7594dbbbb8f499b5d76d51ccf4a24f79b671d932d7d740d08006 pid=3199 runtime=io.containerd.runc.v2 Aug 13 01:01:32.453032 systemd[1]: run-containerd-runc-k8s.io-582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05-runc.yQlDls.mount: Deactivated successfully. 
Aug 13 01:01:32.459480 systemd[1]: Started cri-containerd-582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05.scope.
Aug 13 01:01:32.462965 systemd[1]: Started cri-containerd-eb187e7bd85d7594dbbbb8f499b5d76d51ccf4a24f79b671d932d7d740d08006.scope.
Aug 13 01:01:32.475821 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 01:01:32.482767 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 01:01:32.505951 env[1216]: time="2025-08-13T01:01:32.505890674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4vxlf,Uid:d0e4739d-9b76-4829-8d87-ad2e1eed1ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05\""
Aug 13 01:01:32.506791 kubelet[1928]: E0813 01:01:32.506743 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:32.511805 env[1216]: time="2025-08-13T01:01:32.511714339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pnfrl,Uid:5d6563ab-722b-4d47-acc0-87f09619ac08,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb187e7bd85d7594dbbbb8f499b5d76d51ccf4a24f79b671d932d7d740d08006\""
Aug 13 01:01:32.512747 kubelet[1928]: E0813 01:01:32.512709 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:32.512961 env[1216]: time="2025-08-13T01:01:32.512935260Z" level=info msg="CreateContainer within sandbox \"582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 01:01:32.519022 env[1216]: time="2025-08-13T01:01:32.518974587Z" level=info msg="CreateContainer within sandbox \"eb187e7bd85d7594dbbbb8f499b5d76d51ccf4a24f79b671d932d7d740d08006\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 01:01:32.537986 env[1216]: time="2025-08-13T01:01:32.537935099Z" level=info msg="CreateContainer within sandbox \"eb187e7bd85d7594dbbbb8f499b5d76d51ccf4a24f79b671d932d7d740d08006\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35164422b780e03717eefaba63a81f8a4bc1477819fe22ecfa76de19cf8d48e7\""
Aug 13 01:01:32.538512 env[1216]: time="2025-08-13T01:01:32.538483053Z" level=info msg="StartContainer for \"35164422b780e03717eefaba63a81f8a4bc1477819fe22ecfa76de19cf8d48e7\""
Aug 13 01:01:32.540262 env[1216]: time="2025-08-13T01:01:32.540229844Z" level=info msg="CreateContainer within sandbox \"582e83cc58b1e8fd179f2de7ccaaea71ad703d13ff794dc9f2f035558e5b0e05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"592191bff93cde8e2d531410bc46346b87137f2ee98270aa6e537ee22f400d81\""
Aug 13 01:01:32.541702 env[1216]: time="2025-08-13T01:01:32.541635907Z" level=info msg="StartContainer for \"592191bff93cde8e2d531410bc46346b87137f2ee98270aa6e537ee22f400d81\""
Aug 13 01:01:32.557373 systemd[1]: Started cri-containerd-35164422b780e03717eefaba63a81f8a4bc1477819fe22ecfa76de19cf8d48e7.scope.
Aug 13 01:01:32.561248 systemd[1]: Started cri-containerd-592191bff93cde8e2d531410bc46346b87137f2ee98270aa6e537ee22f400d81.scope.
Aug 13 01:01:32.592933 env[1216]: time="2025-08-13T01:01:32.592867358Z" level=info msg="StartContainer for \"35164422b780e03717eefaba63a81f8a4bc1477819fe22ecfa76de19cf8d48e7\" returns successfully"
Aug 13 01:01:32.594613 env[1216]: time="2025-08-13T01:01:32.594502880Z" level=info msg="StartContainer for \"592191bff93cde8e2d531410bc46346b87137f2ee98270aa6e537ee22f400d81\" returns successfully"
Aug 13 01:01:33.346721 kubelet[1928]: E0813 01:01:33.346670 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:33.349195 kubelet[1928]: E0813 01:01:33.349152 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:33.368912 kubelet[1928]: I0813 01:01:33.368842 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4vxlf" podStartSLOduration=35.368825171 podStartE2EDuration="35.368825171s" podCreationTimestamp="2025-08-13 01:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:33.367827648 +0000 UTC m=+42.015977174" watchObservedRunningTime="2025-08-13 01:01:33.368825171 +0000 UTC m=+42.016974677"
Aug 13 01:01:33.409128 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:59230.service.
Aug 13 01:01:33.446552 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:33.447815 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:33.452006 systemd-logind[1204]: New session 7 of user core.
Aug 13 01:01:33.453323 systemd[1]: Started session-7.scope.
Aug 13 01:01:33.715896 sshd[3341]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:33.718239 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:59230.service: Deactivated successfully.
Aug 13 01:01:33.719256 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 01:01:33.719840 systemd-logind[1204]: Session 7 logged out. Waiting for processes to exit.
Aug 13 01:01:33.720575 systemd-logind[1204]: Removed session 7.
Aug 13 01:01:34.350934 kubelet[1928]: E0813 01:01:34.350900 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:34.351394 kubelet[1928]: E0813 01:01:34.350958 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:35.352314 kubelet[1928]: E0813 01:01:35.352260 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:35.352314 kubelet[1928]: E0813 01:01:35.352300 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:01:38.721586 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:59232.service.
Aug 13 01:01:38.751307 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 59232 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:38.752724 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:38.756833 systemd-logind[1204]: New session 8 of user core.
Aug 13 01:01:38.757930 systemd[1]: Started session-8.scope.
Aug 13 01:01:38.875522 sshd[3360]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:38.877738 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:59232.service: Deactivated successfully.
Aug 13 01:01:38.878541 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 01:01:38.879523 systemd-logind[1204]: Session 8 logged out. Waiting for processes to exit.
Aug 13 01:01:38.880315 systemd-logind[1204]: Removed session 8.
Aug 13 01:01:43.880628 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:42982.service.
Aug 13 01:01:43.912203 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 42982 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:43.913335 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:43.917012 systemd-logind[1204]: New session 9 of user core.
Aug 13 01:01:43.917756 systemd[1]: Started session-9.scope.
Aug 13 01:01:44.075442 sshd[3374]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:44.078328 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:42982.service: Deactivated successfully.
Aug 13 01:01:44.079153 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 01:01:44.080084 systemd-logind[1204]: Session 9 logged out. Waiting for processes to exit.
Aug 13 01:01:44.081000 systemd-logind[1204]: Removed session 9.
Aug 13 01:01:49.081717 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:42986.service.
Aug 13 01:01:49.111874 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:49.113274 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:49.117738 systemd-logind[1204]: New session 10 of user core.
Aug 13 01:01:49.118790 systemd[1]: Started session-10.scope.
Aug 13 01:01:49.299520 sshd[3388]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:49.302990 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:42986.service: Deactivated successfully.
Aug 13 01:01:49.303535 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 01:01:49.304203 systemd-logind[1204]: Session 10 logged out. Waiting for processes to exit.
Aug 13 01:01:49.305397 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:43002.service.
Aug 13 01:01:49.306210 systemd-logind[1204]: Removed session 10.
Aug 13 01:01:49.336603 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:49.338247 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:49.343853 systemd-logind[1204]: New session 11 of user core.
Aug 13 01:01:49.344786 systemd[1]: Started session-11.scope.
Aug 13 01:01:49.770807 sshd[3402]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:49.773810 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:43002.service: Deactivated successfully.
Aug 13 01:01:49.774319 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 01:01:49.774910 systemd-logind[1204]: Session 11 logged out. Waiting for processes to exit.
Aug 13 01:01:49.776222 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:43004.service.
Aug 13 01:01:49.776924 systemd-logind[1204]: Removed session 11.
Aug 13 01:01:49.805370 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:49.806858 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:49.810332 systemd-logind[1204]: New session 12 of user core.
Aug 13 01:01:49.811148 systemd[1]: Started session-12.scope.
Aug 13 01:01:49.987374 sshd[3413]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:49.989740 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:43004.service: Deactivated successfully.
Aug 13 01:01:49.990400 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 01:01:49.990884 systemd-logind[1204]: Session 12 logged out. Waiting for processes to exit.
Aug 13 01:01:49.991618 systemd-logind[1204]: Removed session 12.
Aug 13 01:01:54.992730 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:45142.service.
Aug 13 01:01:55.022061 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 45142 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:01:55.023336 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:55.026791 systemd-logind[1204]: New session 13 of user core.
Aug 13 01:01:55.027623 systemd[1]: Started session-13.scope.
Aug 13 01:01:55.130383 sshd[3428]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:55.132649 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:45142.service: Deactivated successfully.
Aug 13 01:01:55.133323 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 01:01:55.133911 systemd-logind[1204]: Session 13 logged out. Waiting for processes to exit.
Aug 13 01:01:55.134578 systemd-logind[1204]: Removed session 13.
Aug 13 01:02:00.135429 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:49586.service.
Aug 13 01:02:00.166520 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 49586 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:00.167885 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:00.171641 systemd-logind[1204]: New session 14 of user core.
Aug 13 01:02:00.172410 systemd[1]: Started session-14.scope.
Aug 13 01:02:00.285224 sshd[3444]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:00.287732 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:49586.service: Deactivated successfully.
Aug 13 01:02:00.288596 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 01:02:00.289391 systemd-logind[1204]: Session 14 logged out. Waiting for processes to exit.
Aug 13 01:02:00.290055 systemd-logind[1204]: Removed session 14.
Aug 13 01:02:05.290301 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:49594.service.
Aug 13 01:02:05.320507 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 49594 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:05.322354 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:05.326753 systemd-logind[1204]: New session 15 of user core.
Aug 13 01:02:05.327972 systemd[1]: Started session-15.scope.
Aug 13 01:02:05.439348 sshd[3458]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:05.442705 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:49594.service: Deactivated successfully.
Aug 13 01:02:05.443327 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 01:02:05.443858 systemd-logind[1204]: Session 15 logged out. Waiting for processes to exit.
Aug 13 01:02:05.445197 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:49608.service.
Aug 13 01:02:05.446118 systemd-logind[1204]: Removed session 15.
Aug 13 01:02:05.475578 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 49608 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:05.476735 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:05.480482 systemd-logind[1204]: New session 16 of user core.
Aug 13 01:02:05.481469 systemd[1]: Started session-16.scope.
Aug 13 01:02:05.848118 sshd[3472]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:05.851437 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:49608.service: Deactivated successfully.
Aug 13 01:02:05.852143 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 01:02:05.852849 systemd-logind[1204]: Session 16 logged out. Waiting for processes to exit.
Aug 13 01:02:05.854303 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:49620.service.
Aug 13 01:02:05.855397 systemd-logind[1204]: Removed session 16.
Aug 13 01:02:05.886419 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 49620 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:05.887854 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:05.891579 systemd-logind[1204]: New session 17 of user core.
Aug 13 01:02:05.892388 systemd[1]: Started session-17.scope.
Aug 13 01:02:06.383337 sshd[3484]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:06.387206 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:49624.service.
Aug 13 01:02:06.387817 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:49620.service: Deactivated successfully.
Aug 13 01:02:06.388515 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 01:02:06.390316 systemd-logind[1204]: Session 17 logged out. Waiting for processes to exit.
Aug 13 01:02:06.391514 systemd-logind[1204]: Removed session 17.
Aug 13 01:02:06.420597 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 49624 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:06.421730 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:06.425439 systemd-logind[1204]: New session 18 of user core.
Aug 13 01:02:06.426399 systemd[1]: Started session-18.scope.
Aug 13 01:02:06.452533 kubelet[1928]: E0813 01:02:06.452504 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:06.792813 sshd[3500]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:06.797226 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:49634.service.
Aug 13 01:02:06.797746 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:49624.service: Deactivated successfully.
Aug 13 01:02:06.798316 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 01:02:06.799000 systemd-logind[1204]: Session 18 logged out. Waiting for processes to exit.
Aug 13 01:02:06.800133 systemd-logind[1204]: Removed session 18.
Aug 13 01:02:06.830755 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 49634 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:06.832075 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:06.835269 systemd-logind[1204]: New session 19 of user core.
Aug 13 01:02:06.836180 systemd[1]: Started session-19.scope.
Aug 13 01:02:06.941804 sshd[3514]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:06.944507 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:49634.service: Deactivated successfully.
Aug 13 01:02:06.945375 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 01:02:06.945944 systemd-logind[1204]: Session 19 logged out. Waiting for processes to exit.
Aug 13 01:02:06.946716 systemd-logind[1204]: Removed session 19.
Aug 13 01:02:10.452971 kubelet[1928]: E0813 01:02:10.452906 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:11.947106 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:56574.service.
Aug 13 01:02:11.977485 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:11.979101 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:11.983097 systemd-logind[1204]: New session 20 of user core.
Aug 13 01:02:11.984019 systemd[1]: Started session-20.scope.
Aug 13 01:02:12.091962 sshd[3528]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:12.094346 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:56574.service: Deactivated successfully.
Aug 13 01:02:12.095103 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 01:02:12.095701 systemd-logind[1204]: Session 20 logged out. Waiting for processes to exit.
Aug 13 01:02:12.096523 systemd-logind[1204]: Removed session 20.
Aug 13 01:02:17.096601 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:56580.service.
Aug 13 01:02:17.125570 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 56580 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:17.126606 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:17.129725 systemd-logind[1204]: New session 21 of user core.
Aug 13 01:02:17.130526 systemd[1]: Started session-21.scope.
Aug 13 01:02:17.231667 sshd[3543]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:17.233723 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:56580.service: Deactivated successfully.
Aug 13 01:02:17.234404 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 01:02:17.234967 systemd-logind[1204]: Session 21 logged out. Waiting for processes to exit.
Aug 13 01:02:17.235600 systemd-logind[1204]: Removed session 21.
Aug 13 01:02:19.453479 kubelet[1928]: E0813 01:02:19.453404 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:22.236973 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:32838.service.
Aug 13 01:02:22.265935 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 32838 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:22.267191 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:22.270620 systemd-logind[1204]: New session 22 of user core.
Aug 13 01:02:22.271365 systemd[1]: Started session-22.scope.
Aug 13 01:02:22.375221 sshd[3558]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:22.377934 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:32838.service: Deactivated successfully.
Aug 13 01:02:22.378710 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 01:02:22.379423 systemd-logind[1204]: Session 22 logged out. Waiting for processes to exit.
Aug 13 01:02:22.380129 systemd-logind[1204]: Removed session 22.
Aug 13 01:02:27.383633 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:32844.service.
Aug 13 01:02:27.430426 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:27.432719 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:27.451878 systemd-logind[1204]: New session 23 of user core.
Aug 13 01:02:27.453355 systemd[1]: Started session-23.scope.
Aug 13 01:02:27.648095 sshd[3572]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:27.652703 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:32844.service: Deactivated successfully.
Aug 13 01:02:27.656198 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 01:02:27.657548 systemd-logind[1204]: Session 23 logged out. Waiting for processes to exit.
Aug 13 01:02:27.665641 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:32846.service.
Aug 13 01:02:27.666971 systemd-logind[1204]: Removed session 23.
Aug 13 01:02:27.716324 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 01:02:27.720918 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:02:27.726105 systemd-logind[1204]: New session 24 of user core.
Aug 13 01:02:27.727149 systemd[1]: Started session-24.scope.
Aug 13 01:02:28.453278 kubelet[1928]: E0813 01:02:28.453243 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:29.944866 kubelet[1928]: I0813 01:02:29.944792 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pnfrl" podStartSLOduration=91.944760967 podStartE2EDuration="1m31.944760967s" podCreationTimestamp="2025-08-13 01:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:33.396342417 +0000 UTC m=+42.044491913" watchObservedRunningTime="2025-08-13 01:02:29.944760967 +0000 UTC m=+98.592910473"
Aug 13 01:02:29.961470 systemd[1]: run-containerd-runc-k8s.io-7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437-runc.Vak2NV.mount: Deactivated successfully.
Aug 13 01:02:29.987699 env[1216]: time="2025-08-13T01:02:29.987619522Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:02:29.993074 env[1216]: time="2025-08-13T01:02:29.993030650Z" level=info msg="StopContainer for \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" with timeout 2 (s)"
Aug 13 01:02:29.993268 env[1216]: time="2025-08-13T01:02:29.993244759Z" level=info msg="Stop container \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" with signal terminated"
Aug 13 01:02:29.999851 systemd-networkd[1034]: lxc_health: Link DOWN
Aug 13 01:02:29.999872 systemd-networkd[1034]: lxc_health: Lost carrier
Aug 13 01:02:30.030057 env[1216]: time="2025-08-13T01:02:30.029984831Z" level=info msg="StopContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" with timeout 30 (s)"
Aug 13 01:02:30.030413 env[1216]: time="2025-08-13T01:02:30.030381500Z" level=info msg="Stop container \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" with signal terminated"
Aug 13 01:02:30.038079 systemd[1]: cri-containerd-2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba.scope: Deactivated successfully.
Aug 13 01:02:30.039183 systemd[1]: cri-containerd-7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437.scope: Deactivated successfully.
Aug 13 01:02:30.039443 systemd[1]: cri-containerd-7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437.scope: Consumed 6.705s CPU time.
Aug 13 01:02:30.054907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437-rootfs.mount: Deactivated successfully.
Aug 13 01:02:30.059557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba-rootfs.mount: Deactivated successfully.
Aug 13 01:02:30.361984 env[1216]: time="2025-08-13T01:02:30.361925928Z" level=info msg="shim disconnected" id=2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba
Aug 13 01:02:30.361984 env[1216]: time="2025-08-13T01:02:30.361982937Z" level=warning msg="cleaning up after shim disconnected" id=2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba namespace=k8s.io
Aug 13 01:02:30.361984 env[1216]: time="2025-08-13T01:02:30.361994930Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:30.362261 env[1216]: time="2025-08-13T01:02:30.361926068Z" level=info msg="shim disconnected" id=7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437
Aug 13 01:02:30.362261 env[1216]: time="2025-08-13T01:02:30.362028274Z" level=warning msg="cleaning up after shim disconnected" id=7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437 namespace=k8s.io
Aug 13 01:02:30.362261 env[1216]: time="2025-08-13T01:02:30.362037662Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:30.369804 env[1216]: time="2025-08-13T01:02:30.369725557Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3657 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:30.370283 env[1216]: time="2025-08-13T01:02:30.370249068Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3656 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:30.426848 env[1216]: time="2025-08-13T01:02:30.426749783Z" level=info msg="StopContainer for \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" returns successfully"
Aug 13 01:02:30.427541 env[1216]: time="2025-08-13T01:02:30.427473157Z" level=info msg="StopPodSandbox for \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\""
Aug 13 01:02:30.427614 env[1216]: time="2025-08-13T01:02:30.427550535Z" level=info msg="Container to stop \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.427614 env[1216]: time="2025-08-13T01:02:30.427568209Z" level=info msg="Container to stop \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.427614 env[1216]: time="2025-08-13T01:02:30.427580472Z" level=info msg="Container to stop \"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.427614 env[1216]: time="2025-08-13T01:02:30.427592535Z" level=info msg="Container to stop \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.427614 env[1216]: time="2025-08-13T01:02:30.427604418Z" level=info msg="Container to stop \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.428501 env[1216]: time="2025-08-13T01:02:30.428447601Z" level=info msg="StopContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" returns successfully"
Aug 13 01:02:30.429154 env[1216]: time="2025-08-13T01:02:30.429112612Z" level=info msg="StopPodSandbox for \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\""
Aug 13 01:02:30.429236 env[1216]: time="2025-08-13T01:02:30.429206292Z" level=info msg="Container to stop \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:02:30.435339 systemd[1]: cri-containerd-0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3.scope: Deactivated successfully.
Aug 13 01:02:30.437998 systemd[1]: cri-containerd-44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8.scope: Deactivated successfully.
Aug 13 01:02:30.957663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8-rootfs.mount: Deactivated successfully.
Aug 13 01:02:30.957793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8-shm.mount: Deactivated successfully.
Aug 13 01:02:30.957856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3-rootfs.mount: Deactivated successfully.
Aug 13 01:02:30.957907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3-shm.mount: Deactivated successfully.
Aug 13 01:02:31.046471 env[1216]: time="2025-08-13T01:02:31.046396347Z" level=info msg="shim disconnected" id=44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8
Aug 13 01:02:31.046471 env[1216]: time="2025-08-13T01:02:31.046454630Z" level=warning msg="cleaning up after shim disconnected" id=44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8 namespace=k8s.io
Aug 13 01:02:31.046471 env[1216]: time="2025-08-13T01:02:31.046466091Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:31.046993 env[1216]: time="2025-08-13T01:02:31.046398161Z" level=info msg="shim disconnected" id=0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3
Aug 13 01:02:31.046993 env[1216]: time="2025-08-13T01:02:31.046724145Z" level=warning msg="cleaning up after shim disconnected" id=0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3 namespace=k8s.io
Aug 13 01:02:31.046993 env[1216]: time="2025-08-13T01:02:31.046739184Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:31.056615 env[1216]: time="2025-08-13T01:02:31.056545256Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:31.056819 env[1216]: time="2025-08-13T01:02:31.056577788Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3721 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:31.056987 env[1216]: time="2025-08-13T01:02:31.056954910Z" level=info msg="TearDown network for sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" successfully"
Aug 13 01:02:31.057044 env[1216]: time="2025-08-13T01:02:31.056977533Z" level=info msg="TearDown network for sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" successfully"
Aug 13 01:02:31.057044 env[1216]: time="2025-08-13T01:02:31.056988384Z" level=info msg="StopPodSandbox for \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" returns successfully"
Aug 13 01:02:31.057044 env[1216]: time="2025-08-13T01:02:31.057003914Z" level=info msg="StopPodSandbox for \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" returns successfully"
Aug 13 01:02:31.143966 kubelet[1928]: I0813 01:02:31.143928 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-bpf-maps\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.143966 kubelet[1928]: I0813 01:02:31.143968 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-config-path\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.143966 kubelet[1928]: I0813 01:02:31.143981 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-net\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.143997 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x99pl\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-kube-api-access-x99pl\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.144020 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-etc-cni-netd\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.144048 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-hostproc\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.144071 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cni-path\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.144091 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4hjd\" (UniqueName: \"kubernetes.io/projected/af8768f7-f493-4b32-96af-bc4f16fe8d10-kube-api-access-v4hjd\") pod \"af8768f7-f493-4b32-96af-bc4f16fe8d10\" (UID: \"af8768f7-f493-4b32-96af-bc4f16fe8d10\") "
Aug 13 01:02:31.144552 kubelet[1928]: I0813 01:02:31.144322 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 01:02:31.144732 kubelet[1928]: I0813 01:02:31.144371 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.144732 kubelet[1928]: I0813 01:02:31.144392 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-hostproc" (OuterVolumeSpecName: "hostproc") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.144732 kubelet[1928]: I0813 01:02:31.144406 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.144732 kubelet[1928]: I0813 01:02:31.144418 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cni-path" (OuterVolumeSpecName: "cni-path") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.144732 kubelet[1928]: I0813 01:02:31.144518 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-run\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144546 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-lib-modules\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144572 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-hubble-tls\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144599 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af8768f7-f493-4b32-96af-bc4f16fe8d10-cilium-config-path\") pod \"af8768f7-f493-4b32-96af-bc4f16fe8d10\" (UID: \"af8768f7-f493-4b32-96af-bc4f16fe8d10\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144619 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-cgroup\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144630 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-xtables-lock\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.144901 kubelet[1928]: I0813 01:02:31.144647 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-kernel\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.145081 kubelet[1928]: I0813 01:02:31.144669 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fb65264-0805-48a0-8cae-9da614d07b43-clustermesh-secrets\") pod \"8fb65264-0805-48a0-8cae-9da614d07b43\" (UID: \"8fb65264-0805-48a0-8cae-9da614d07b43\") " Aug 13 01:02:31.145081 kubelet[1928]: I0813 01:02:31.144707 1928 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.145081 kubelet[1928]: I0813 01:02:31.144718 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.145081 kubelet[1928]: I0813 01:02:31.144729 1928 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.145081 kubelet[1928]: I0813 01:02:31.144740 1928 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.145081 kubelet[1928]: 
I0813 01:02:31.144749 1928 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.175037 kubelet[1928]: I0813 01:02:31.174731 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.175037 kubelet[1928]: I0813 01:02:31.174766 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.177169 kubelet[1928]: I0813 01:02:31.177133 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:02:31.178078 kubelet[1928]: I0813 01:02:31.178029 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb65264-0805-48a0-8cae-9da614d07b43-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:02:31.178169 kubelet[1928]: I0813 01:02:31.178089 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.179621 kubelet[1928]: I0813 01:02:31.179125 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:02:31.179621 kubelet[1928]: I0813 01:02:31.179167 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.179621 kubelet[1928]: I0813 01:02:31.179185 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:31.179621 kubelet[1928]: I0813 01:02:31.179536 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-kube-api-access-x99pl" (OuterVolumeSpecName: "kube-api-access-x99pl") pod "8fb65264-0805-48a0-8cae-9da614d07b43" (UID: "8fb65264-0805-48a0-8cae-9da614d07b43"). InnerVolumeSpecName "kube-api-access-x99pl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:02:31.180026 systemd[1]: var-lib-kubelet-pods-af8768f7\x2df493\x2d4b32\x2d96af\x2dbc4f16fe8d10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4hjd.mount: Deactivated successfully. Aug 13 01:02:31.180176 systemd[1]: var-lib-kubelet-pods-8fb65264\x2d0805\x2d48a0\x2d8cae\x2d9da614d07b43-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:02:31.180283 systemd[1]: var-lib-kubelet-pods-8fb65264\x2d0805\x2d48a0\x2d8cae\x2d9da614d07b43-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:02:31.180878 kubelet[1928]: I0813 01:02:31.180567 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8768f7-f493-4b32-96af-bc4f16fe8d10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af8768f7-f493-4b32-96af-bc4f16fe8d10" (UID: "af8768f7-f493-4b32-96af-bc4f16fe8d10"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:02:31.180878 kubelet[1928]: I0813 01:02:31.180661 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8768f7-f493-4b32-96af-bc4f16fe8d10-kube-api-access-v4hjd" (OuterVolumeSpecName: "kube-api-access-v4hjd") pod "af8768f7-f493-4b32-96af-bc4f16fe8d10" (UID: "af8768f7-f493-4b32-96af-bc4f16fe8d10"). InnerVolumeSpecName "kube-api-access-v4hjd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:02:31.183321 systemd[1]: var-lib-kubelet-pods-8fb65264\x2d0805\x2d48a0\x2d8cae\x2d9da614d07b43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx99pl.mount: Deactivated successfully. Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245900 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af8768f7-f493-4b32-96af-bc4f16fe8d10-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245927 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245935 1928 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245942 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245949 1928 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fb65264-0805-48a0-8cae-9da614d07b43-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245957 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245965 1928 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x99pl\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-kube-api-access-x99pl\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.245987 kubelet[1928]: I0813 01:02:31.245972 1928 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v4hjd\" (UniqueName: \"kubernetes.io/projected/af8768f7-f493-4b32-96af-bc4f16fe8d10-kube-api-access-v4hjd\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.246328 kubelet[1928]: I0813 01:02:31.245979 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.246328 kubelet[1928]: I0813 01:02:31.245985 1928 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fb65264-0805-48a0-8cae-9da614d07b43-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.246328 kubelet[1928]: I0813 01:02:31.245993 1928 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fb65264-0805-48a0-8cae-9da614d07b43-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:31.456375 kubelet[1928]: E0813 01:02:31.456322 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:02:31.460439 systemd[1]: Removed slice kubepods-besteffort-podaf8768f7_f493_4b32_96af_bc4f16fe8d10.slice. Aug 13 01:02:31.464174 systemd[1]: Removed slice kubepods-burstable-pod8fb65264_0805_48a0_8cae_9da614d07b43.slice. Aug 13 01:02:31.464259 systemd[1]: kubepods-burstable-pod8fb65264_0805_48a0_8cae_9da614d07b43.slice: Consumed 6.804s CPU time. 
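Each kubelet `reconciler_common.go:299` "Volume detached" entry above carries the volume name plus a UniqueName that embeds the pod UID. A small sketch (the regex and helper name are mine, not from any tool in this log) that collects the detached volume names for one pod from journal text in this escaped form:

```python
import re

# Matches a kubelet "Volume detached" entry as it appears in the
# journal, where the inner quotes are escaped as \" in the text, e.g.:
#   "Volume detached for volume \"bpf-maps\" (UniqueName:
#    \"kubernetes.io/host-path/<pod-uid>-bpf-maps\") on node ..."
DETACHED = re.compile(
    r'Volume detached for volume \\"(?P<name>[^"\\]+)\\" '
    r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\)'
)

def detached_volumes(journal_text, pod_uid):
    """Return the volume names detached for the pod whose UID
    appears in the volume's UniqueName."""
    return [
        m.group("name")
        for m in DETACHED.finditer(journal_text)
        if pod_uid in m.group("unique")
    ]
```

Run over this section, it would list `bpf-maps`, `host-proc-sys-net`, `etc-cni-netd`, and so on for pod `8fb65264-0805-48a0-8cae-9da614d07b43`.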
Aug 13 01:02:31.479974 kubelet[1928]: I0813 01:02:31.479844 1928 scope.go:117] "RemoveContainer" containerID="7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437" Aug 13 01:02:31.481434 env[1216]: time="2025-08-13T01:02:31.481382936Z" level=info msg="RemoveContainer for \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\"" Aug 13 01:02:31.496664 sshd[3585]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:31.501153 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:54834.service. Aug 13 01:02:31.501638 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:32846.service: Deactivated successfully. Aug 13 01:02:31.502458 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:02:31.502698 kubelet[1928]: E0813 01:02:31.502596 1928 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:02:31.503175 systemd-logind[1204]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:02:31.504579 systemd-logind[1204]: Removed session 24. 
Aug 13 01:02:31.532364 env[1216]: time="2025-08-13T01:02:31.532305158Z" level=info msg="RemoveContainer for \"7713b352369b6d23f2cca39b2693f78b8860e9bd2b95bdd1845c9ea9852dc437\" returns successfully" Aug 13 01:02:31.532725 kubelet[1928]: I0813 01:02:31.532675 1928 scope.go:117] "RemoveContainer" containerID="66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25" Aug 13 01:02:31.534329 env[1216]: time="2025-08-13T01:02:31.534289284Z" level=info msg="RemoveContainer for \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\"" Aug 13 01:02:31.536269 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:02:31.538745 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:31.543641 systemd-logind[1204]: New session 25 of user core. Aug 13 01:02:31.544648 systemd[1]: Started session-25.scope. Aug 13 01:02:31.591337 env[1216]: time="2025-08-13T01:02:31.591268305Z" level=info msg="RemoveContainer for \"66f8b59ff92af4a734f11253acf448f5c2e9251c833d7e61d6c99ad564f08a25\" returns successfully" Aug 13 01:02:31.591549 kubelet[1928]: I0813 01:02:31.591521 1928 scope.go:117] "RemoveContainer" containerID="1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48" Aug 13 01:02:31.593085 env[1216]: time="2025-08-13T01:02:31.593053119Z" level=info msg="RemoveContainer for \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\"" Aug 13 01:02:31.658621 env[1216]: time="2025-08-13T01:02:31.658493289Z" level=info msg="RemoveContainer for \"1d6ba6806ed3726b24418e4beb26fb6e0e12afa4ab688c83cfc9390f647dcf48\" returns successfully" Aug 13 01:02:31.659055 kubelet[1928]: I0813 01:02:31.658998 1928 scope.go:117] "RemoveContainer" containerID="639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44" Aug 13 01:02:31.660410 env[1216]: time="2025-08-13T01:02:31.660367675Z" level=info msg="RemoveContainer for 
\"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\"" Aug 13 01:02:31.724467 env[1216]: time="2025-08-13T01:02:31.724335949Z" level=info msg="RemoveContainer for \"639b36df875dfdabc4cc7ea4d0547a44d00f7f60802b93814111d5f2019c4c44\" returns successfully" Aug 13 01:02:31.724731 kubelet[1928]: I0813 01:02:31.724609 1928 scope.go:117] "RemoveContainer" containerID="ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888" Aug 13 01:02:31.725762 env[1216]: time="2025-08-13T01:02:31.725715388Z" level=info msg="RemoveContainer for \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\"" Aug 13 01:02:31.784418 env[1216]: time="2025-08-13T01:02:31.784279491Z" level=info msg="RemoveContainer for \"ecf264d418836e3af07377837a37b9ed1a674b012103ef31ce06958191ac1888\" returns successfully" Aug 13 01:02:31.784618 kubelet[1928]: I0813 01:02:31.784577 1928 scope.go:117] "RemoveContainer" containerID="2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba" Aug 13 01:02:31.786281 env[1216]: time="2025-08-13T01:02:31.786206027Z" level=info msg="RemoveContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\"" Aug 13 01:02:31.840111 env[1216]: time="2025-08-13T01:02:31.840062463Z" level=info msg="RemoveContainer for \"2264f3072206b9840d014ed1b81715b0d956a0053c7168b6cd7b1bdad5d8c8ba\" returns successfully" Aug 13 01:02:32.983917 sshd[3749]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:32.987637 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:54834.service: Deactivated successfully. Aug 13 01:02:32.988378 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:02:32.989271 systemd-logind[1204]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:02:32.991199 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:54840.service. Aug 13 01:02:32.993094 systemd-logind[1204]: Removed session 25. 
Aug 13 01:02:33.025737 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 54840 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:02:33.027186 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:33.030901 systemd-logind[1204]: New session 26 of user core. Aug 13 01:02:33.031721 systemd[1]: Started session-26.scope. Aug 13 01:02:33.096392 systemd[1]: Created slice kubepods-burstable-pod3f9774fe_537b_4656_b5b3_2981f3e7c430.slice. Aug 13 01:02:33.157103 kubelet[1928]: I0813 01:02:33.157059 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-cgroup\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157609 kubelet[1928]: I0813 01:02:33.157575 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-clustermesh-secrets\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157735 kubelet[1928]: I0813 01:02:33.157715 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-net\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157820 kubelet[1928]: I0813 01:02:33.157742 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-etc-cni-netd\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " 
pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157820 kubelet[1928]: I0813 01:02:33.157755 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-xtables-lock\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157820 kubelet[1928]: I0813 01:02:33.157781 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-config-path\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157903 kubelet[1928]: I0813 01:02:33.157854 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-hostproc\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157933 kubelet[1928]: I0813 01:02:33.157907 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cni-path\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157958 kubelet[1928]: I0813 01:02:33.157930 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-hubble-tls\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.157958 kubelet[1928]: I0813 01:02:33.157952 1928 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbglm\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-kube-api-access-kbglm\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.158013 kubelet[1928]: I0813 01:02:33.157970 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-bpf-maps\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.158013 kubelet[1928]: I0813 01:02:33.157990 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-kernel\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.158061 kubelet[1928]: I0813 01:02:33.158013 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-run\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.158061 kubelet[1928]: I0813 01:02:33.158030 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-lib-modules\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.158061 kubelet[1928]: I0813 01:02:33.158049 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-ipsec-secrets\") pod \"cilium-q8jxc\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " pod="kube-system/cilium-q8jxc" Aug 13 01:02:33.183176 sshd[3762]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:33.186602 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:54840.service: Deactivated successfully. Aug 13 01:02:33.187405 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:02:33.188528 systemd-logind[1204]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:02:33.190527 systemd[1]: Started sshd@26-10.0.0.83:22-10.0.0.1:54848.service. Aug 13 01:02:33.191543 systemd-logind[1204]: Removed session 26. Aug 13 01:02:33.220706 sshd[3775]: Accepted publickey for core from 10.0.0.1 port 54848 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 01:02:33.222265 sshd[3775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:33.225442 systemd-logind[1204]: New session 27 of user core. Aug 13 01:02:33.226451 systemd[1]: Started session-27.scope. 
Aug 13 01:02:33.238716 kubelet[1928]: E0813 01:02:33.238564 1928 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-kbglm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-q8jxc" podUID="3f9774fe-537b-4656-b5b3-2981f3e7c430" Aug 13 01:02:33.454943 kubelet[1928]: I0813 01:02:33.454903 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb65264-0805-48a0-8cae-9da614d07b43" path="/var/lib/kubelet/pods/8fb65264-0805-48a0-8cae-9da614d07b43/volumes" Aug 13 01:02:33.455466 kubelet[1928]: I0813 01:02:33.455443 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8768f7-f493-4b32-96af-bc4f16fe8d10" path="/var/lib/kubelet/pods/af8768f7-f493-4b32-96af-bc4f16fe8d10/volumes" Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561744 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-etc-cni-netd\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561813 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-lib-modules\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561831 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cni-path\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: 
\"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561854 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-bpf-maps\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561881 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-config-path\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.561954 kubelet[1928]: I0813 01:02:33.561901 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-cgroup\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.562309 kubelet[1928]: I0813 01:02:33.561922 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-clustermesh-secrets\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.562309 kubelet[1928]: I0813 01:02:33.561910 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562309 kubelet[1928]: I0813 01:02:33.561940 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-xtables-lock\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.562309 kubelet[1928]: I0813 01:02:33.561960 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-ipsec-secrets\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.562309 kubelet[1928]: I0813 01:02:33.561971 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cni-path" (OuterVolumeSpecName: "cni-path") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562550 kubelet[1928]: I0813 01:02:33.562037 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562550 kubelet[1928]: I0813 01:02:33.562061 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562697 kubelet[1928]: I0813 01:02:33.562665 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562858 kubelet[1928]: I0813 01:02:33.562833 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.562989 kubelet[1928]: I0813 01:02:33.562967 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-kernel\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.563100 kubelet[1928]: I0813 01:02:33.563078 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-net\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.563249 kubelet[1928]: I0813 01:02:33.563217 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-hostproc\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 
01:02:33.563369 kubelet[1928]: I0813 01:02:33.563347 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-hubble-tls\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.563483 kubelet[1928]: I0813 01:02:33.563461 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbglm\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-kube-api-access-kbglm\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.563593 kubelet[1928]: I0813 01:02:33.563571 1928 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-run\") pod \"3f9774fe-537b-4656-b5b3-2981f3e7c430\" (UID: \"3f9774fe-537b-4656-b5b3-2981f3e7c430\") " Aug 13 01:02:33.563722 kubelet[1928]: I0813 01:02:33.563684 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:02:33.563948 kubelet[1928]: I0813 01:02:33.563733 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.564042 kubelet[1928]: I0813 01:02:33.563748 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.564123 kubelet[1928]: I0813 01:02:33.563761 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.564199 kubelet[1928]: I0813 01:02:33.563791 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-hostproc" (OuterVolumeSpecName: "hostproc") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:02:33.564287 kubelet[1928]: I0813 01:02:33.563709 1928 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.564387 kubelet[1928]: I0813 01:02:33.564370 1928 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.564468 kubelet[1928]: I0813 01:02:33.564452 1928 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.564549 kubelet[1928]: I0813 01:02:33.564533 1928 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.564632 kubelet[1928]: I0813 01:02:33.564616 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.564711 kubelet[1928]: I0813 01:02:33.564696 1928 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.566761 systemd[1]: var-lib-kubelet-pods-3f9774fe\x2d537b\x2d4656\x2db5b3\x2d2981f3e7c430-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 01:02:33.566928 systemd[1]: var-lib-kubelet-pods-3f9774fe\x2d537b\x2d4656\x2db5b3\x2d2981f3e7c430-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 01:02:33.567473 kubelet[1928]: I0813 01:02:33.567424 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:02:33.568957 kubelet[1928]: I0813 01:02:33.568926 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:02:33.569394 kubelet[1928]: I0813 01:02:33.569369 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:02:33.569509 kubelet[1928]: I0813 01:02:33.569358 1928 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-kube-api-access-kbglm" (OuterVolumeSpecName: "kube-api-access-kbglm") pod "3f9774fe-537b-4656-b5b3-2981f3e7c430" (UID: "3f9774fe-537b-4656-b5b3-2981f3e7c430"). InnerVolumeSpecName "kube-api-access-kbglm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:02:33.569595 systemd[1]: var-lib-kubelet-pods-3f9774fe\x2d537b\x2d4656\x2db5b3\x2d2981f3e7c430-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkbglm.mount: Deactivated successfully. Aug 13 01:02:33.569689 systemd[1]: var-lib-kubelet-pods-3f9774fe\x2d537b\x2d4656\x2db5b3\x2d2981f3e7c430-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665714 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665761 1928 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665786 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665794 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665804 1928 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665813 1928 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.665808 kubelet[1928]: I0813 01:02:33.665820 1928 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.666166 kubelet[1928]: I0813 01:02:33.665829 1928 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kbglm\" (UniqueName: \"kubernetes.io/projected/3f9774fe-537b-4656-b5b3-2981f3e7c430-kube-api-access-kbglm\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.666166 kubelet[1928]: I0813 01:02:33.665867 1928 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f9774fe-537b-4656-b5b3-2981f3e7c430-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 01:02:33.794624 kubelet[1928]: I0813 01:02:33.794566 1928 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:02:33Z","lastTransitionTime":"2025-08-13T01:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 01:02:34.494587 systemd[1]: Removed slice kubepods-burstable-pod3f9774fe_537b_4656_b5b3_2981f3e7c430.slice. Aug 13 01:02:34.817217 systemd[1]: Created slice kubepods-burstable-pod7c658f51_5027_458a_a42a_dded78ad5652.slice. 
Aug 13 01:02:34.872425 kubelet[1928]: I0813 01:02:34.872349 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-host-proc-sys-net\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.872425 kubelet[1928]: I0813 01:02:34.872398 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-cilium-run\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.872425 kubelet[1928]: I0813 01:02:34.872413 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-etc-cni-netd\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.872425 kubelet[1928]: I0813 01:02:34.872426 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c658f51-5027-458a-a42a-dded78ad5652-clustermesh-secrets\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.872425 kubelet[1928]: I0813 01:02:34.872443 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-bpf-maps\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872509 1928 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-hostproc\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872564 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c658f51-5027-458a-a42a-dded78ad5652-cilium-config-path\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872586 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c658f51-5027-458a-a42a-dded78ad5652-hubble-tls\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872601 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c658f51-5027-458a-a42a-dded78ad5652-cilium-ipsec-secrets\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872642 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4ph\" (UniqueName: \"kubernetes.io/projected/7c658f51-5027-458a-a42a-dded78ad5652-kube-api-access-qj4ph\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873054 kubelet[1928]: I0813 01:02:34.872676 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-cilium-cgroup\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873262 kubelet[1928]: I0813 01:02:34.872734 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-lib-modules\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873262 kubelet[1928]: I0813 01:02:34.872758 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-xtables-lock\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873262 kubelet[1928]: I0813 01:02:34.872786 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-host-proc-sys-kernel\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:34.873262 kubelet[1928]: I0813 01:02:34.872807 1928 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c658f51-5027-458a-a42a-dded78ad5652-cni-path\") pod \"cilium-4llqf\" (UID: \"7c658f51-5027-458a-a42a-dded78ad5652\") " pod="kube-system/cilium-4llqf" Aug 13 01:02:35.120033 kubelet[1928]: E0813 01:02:35.119965 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:02:35.120554 env[1216]: time="2025-08-13T01:02:35.120507653Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4llqf,Uid:7c658f51-5027-458a-a42a-dded78ad5652,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:35.135295 env[1216]: time="2025-08-13T01:02:35.135195005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:02:35.135295 env[1216]: time="2025-08-13T01:02:35.135278675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:02:35.135295 env[1216]: time="2025-08-13T01:02:35.135304254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:02:35.135580 env[1216]: time="2025-08-13T01:02:35.135516370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd pid=3807 runtime=io.containerd.runc.v2 Aug 13 01:02:35.147446 systemd[1]: Started cri-containerd-ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd.scope. 
Aug 13 01:02:35.171592 env[1216]: time="2025-08-13T01:02:35.171549175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4llqf,Uid:7c658f51-5027-458a-a42a-dded78ad5652,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\"" Aug 13 01:02:35.172761 kubelet[1928]: E0813 01:02:35.172722 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:02:35.178372 env[1216]: time="2025-08-13T01:02:35.178332599Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:02:35.191280 env[1216]: time="2025-08-13T01:02:35.191194554Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07\"" Aug 13 01:02:35.191743 env[1216]: time="2025-08-13T01:02:35.191721593Z" level=info msg="StartContainer for \"8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07\"" Aug 13 01:02:35.208546 systemd[1]: Started cri-containerd-8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07.scope. Aug 13 01:02:35.236001 env[1216]: time="2025-08-13T01:02:35.235923064Z" level=info msg="StartContainer for \"8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07\" returns successfully" Aug 13 01:02:35.241440 systemd[1]: cri-containerd-8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07.scope: Deactivated successfully. 
Aug 13 01:02:35.270503 env[1216]: time="2025-08-13T01:02:35.270438533Z" level=info msg="shim disconnected" id=8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07 Aug 13 01:02:35.270503 env[1216]: time="2025-08-13T01:02:35.270489320Z" level=warning msg="cleaning up after shim disconnected" id=8e0f75a5d4f32d1f724048523fa5e0123fa0dc34cf44e25da3f0e27053137b07 namespace=k8s.io Aug 13 01:02:35.270503 env[1216]: time="2025-08-13T01:02:35.270499080Z" level=info msg="cleaning up dead shim" Aug 13 01:02:35.278381 env[1216]: time="2025-08-13T01:02:35.278305511Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" Aug 13 01:02:35.455042 kubelet[1928]: I0813 01:02:35.454930 1928 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9774fe-537b-4656-b5b3-2981f3e7c430" path="/var/lib/kubelet/pods/3f9774fe-537b-4656-b5b3-2981f3e7c430/volumes" Aug 13 01:02:35.494299 kubelet[1928]: E0813 01:02:35.494234 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 01:02:35.502839 env[1216]: time="2025-08-13T01:02:35.499702124Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:02:35.512602 env[1216]: time="2025-08-13T01:02:35.512542949Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f\"" Aug 13 01:02:35.513242 env[1216]: time="2025-08-13T01:02:35.513199667Z" level=info msg="StartContainer for 
\"d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f\"" Aug 13 01:02:35.531640 systemd[1]: Started cri-containerd-d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f.scope. Aug 13 01:02:35.559403 systemd[1]: cri-containerd-d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f.scope: Deactivated successfully. Aug 13 01:02:35.561975 env[1216]: time="2025-08-13T01:02:35.561931766Z" level=info msg="StartContainer for \"d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f\" returns successfully" Aug 13 01:02:35.584590 env[1216]: time="2025-08-13T01:02:35.584517535Z" level=info msg="shim disconnected" id=d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f Aug 13 01:02:35.584590 env[1216]: time="2025-08-13T01:02:35.584563754Z" level=warning msg="cleaning up after shim disconnected" id=d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f namespace=k8s.io Aug 13 01:02:35.584590 env[1216]: time="2025-08-13T01:02:35.584572170Z" level=info msg="cleaning up dead shim" Aug 13 01:02:35.592243 env[1216]: time="2025-08-13T01:02:35.592194058Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n" Aug 13 01:02:36.265607 systemd[1]: run-containerd-runc-k8s.io-d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f-runc.RwSGqW.mount: Deactivated successfully. Aug 13 01:02:36.265697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4259940763093c093b310e93aee9f27d1f4a66f73a3a082bceeb3a6782bba7f-rootfs.mount: Deactivated successfully. 
Aug 13 01:02:36.497418 kubelet[1928]: E0813 01:02:36.497384 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:36.504803 kubelet[1928]: E0813 01:02:36.503400 1928 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:02:36.586090 env[1216]: time="2025-08-13T01:02:36.586034735Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:02:36.828189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397346847.mount: Deactivated successfully.
Aug 13 01:02:37.027207 env[1216]: time="2025-08-13T01:02:37.027086126Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541\""
Aug 13 01:02:37.027836 env[1216]: time="2025-08-13T01:02:37.027805114Z" level=info msg="StartContainer for \"dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541\""
Aug 13 01:02:37.045710 systemd[1]: Started cri-containerd-dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541.scope.
Aug 13 01:02:37.136584 systemd[1]: cri-containerd-dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541.scope: Deactivated successfully.
Aug 13 01:02:37.137438 env[1216]: time="2025-08-13T01:02:37.137389236Z" level=info msg="StartContainer for \"dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541\" returns successfully"
Aug 13 01:02:37.265818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541-rootfs.mount: Deactivated successfully.
Aug 13 01:02:37.292134 env[1216]: time="2025-08-13T01:02:37.292041731Z" level=info msg="shim disconnected" id=dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541
Aug 13 01:02:37.292134 env[1216]: time="2025-08-13T01:02:37.292096346Z" level=warning msg="cleaning up after shim disconnected" id=dc0be736ef38fa25b489f71fce45654bdb0483deef5bdb4795849513f41b6541 namespace=k8s.io
Aug 13 01:02:37.292134 env[1216]: time="2025-08-13T01:02:37.292106265Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:37.299054 env[1216]: time="2025-08-13T01:02:37.299026686Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4008 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:37.500986 kubelet[1928]: E0813 01:02:37.500929 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:37.735864 env[1216]: time="2025-08-13T01:02:37.735805369Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:02:38.311308 env[1216]: time="2025-08-13T01:02:38.311226965Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23\""
Aug 13 01:02:38.311960 env[1216]: time="2025-08-13T01:02:38.311894875Z" level=info msg="StartContainer for \"17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23\""
Aug 13 01:02:38.328679 systemd[1]: Started cri-containerd-17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23.scope.
Aug 13 01:02:38.348158 systemd[1]: cri-containerd-17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23.scope: Deactivated successfully.
Aug 13 01:02:38.575239 env[1216]: time="2025-08-13T01:02:38.575152130Z" level=info msg="StartContainer for \"17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23\" returns successfully"
Aug 13 01:02:38.589864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23-rootfs.mount: Deactivated successfully.
Aug 13 01:02:38.922318 env[1216]: time="2025-08-13T01:02:38.922167034Z" level=info msg="shim disconnected" id=17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23
Aug 13 01:02:38.922318 env[1216]: time="2025-08-13T01:02:38.922227419Z" level=warning msg="cleaning up after shim disconnected" id=17d241153984e5b41970468a97dd6056244ca84bbf5608bc2e030c9c595abc23 namespace=k8s.io
Aug 13 01:02:38.922318 env[1216]: time="2025-08-13T01:02:38.922238010Z" level=info msg="cleaning up dead shim"
Aug 13 01:02:38.929047 env[1216]: time="2025-08-13T01:02:38.929008587Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:02:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4061 runtime=io.containerd.runc.v2\n"
Aug 13 01:02:39.582267 kubelet[1928]: E0813 01:02:39.582207 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:39.774408 env[1216]: time="2025-08-13T01:02:39.774356278Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:02:39.793550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735618476.mount: Deactivated successfully.
Aug 13 01:02:39.796571 env[1216]: time="2025-08-13T01:02:39.796515219Z" level=info msg="CreateContainer within sandbox \"ec6dbca39246cc286a8540c4073aab804917a7fa2c70491f933732e504a9dbfd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40\""
Aug 13 01:02:39.797397 env[1216]: time="2025-08-13T01:02:39.797283382Z" level=info msg="StartContainer for \"6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40\""
Aug 13 01:02:39.819577 systemd[1]: Started cri-containerd-6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40.scope.
Aug 13 01:02:39.853231 env[1216]: time="2025-08-13T01:02:39.853001200Z" level=info msg="StartContainer for \"6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40\" returns successfully"
Aug 13 01:02:40.147819 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 01:02:40.452742 kubelet[1928]: E0813 01:02:40.452664 1928 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-pnfrl" podUID="5d6563ab-722b-4d47-acc0-87f09619ac08"
Aug 13 01:02:40.587040 kubelet[1928]: E0813 01:02:40.586985 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:40.603085 kubelet[1928]: I0813 01:02:40.603001 1928 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4llqf" podStartSLOduration=6.602984494 podStartE2EDuration="6.602984494s" podCreationTimestamp="2025-08-13 01:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:40.602661094 +0000 UTC m=+109.250810610" watchObservedRunningTime="2025-08-13 01:02:40.602984494 +0000 UTC m=+109.251134000"
Aug 13 01:02:40.790894 systemd[1]: run-containerd-runc-k8s.io-6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40-runc.IEd6lY.mount: Deactivated successfully.
Aug 13 01:02:41.588388 kubelet[1928]: E0813 01:02:41.588322 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:42.452943 kubelet[1928]: E0813 01:02:42.452907 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:42.590651 kubelet[1928]: E0813 01:02:42.590614 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:42.910575 systemd-networkd[1034]: lxc_health: Link UP
Aug 13 01:02:42.918799 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 01:02:42.921910 systemd-networkd[1034]: lxc_health: Gained carrier
Aug 13 01:02:43.593482 kubelet[1928]: E0813 01:02:43.593432 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:44.595051 kubelet[1928]: E0813 01:02:44.595011 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:44.832963 systemd-networkd[1034]: lxc_health: Gained IPv6LL
Aug 13 01:02:45.596436 kubelet[1928]: E0813 01:02:45.596377 1928 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 01:02:45.856884 systemd[1]: run-containerd-runc-k8s.io-6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40-runc.8M5XDH.mount: Deactivated successfully.
Aug 13 01:02:50.042453 systemd[1]: run-containerd-runc-k8s.io-6612a37468f3a1a0b045df5f3a5249ccb7a5c05ee57449f31fe71b3b38127e40-runc.n8twIQ.mount: Deactivated successfully.
Aug 13 01:02:50.090720 sshd[3775]: pam_unix(sshd:session): session closed for user core
Aug 13 01:02:50.093335 systemd[1]: sshd@26-10.0.0.83:22-10.0.0.1:54848.service: Deactivated successfully.
Aug 13 01:02:50.094117 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 01:02:50.094733 systemd-logind[1204]: Session 27 logged out. Waiting for processes to exit.
Aug 13 01:02:50.095514 systemd-logind[1204]: Removed session 27.
Aug 13 01:02:51.448694 env[1216]: time="2025-08-13T01:02:51.448623587Z" level=info msg="StopPodSandbox for \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\""
Aug 13 01:02:51.449074 env[1216]: time="2025-08-13T01:02:51.448738047Z" level=info msg="TearDown network for sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" successfully"
Aug 13 01:02:51.449074 env[1216]: time="2025-08-13T01:02:51.448791860Z" level=info msg="StopPodSandbox for \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" returns successfully"
Aug 13 01:02:51.449157 env[1216]: time="2025-08-13T01:02:51.449124579Z" level=info msg="RemovePodSandbox for \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\""
Aug 13 01:02:51.449200 env[1216]: time="2025-08-13T01:02:51.449157393Z" level=info msg="Forcibly stopping sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\""
Aug 13 01:02:51.449230 env[1216]: time="2025-08-13T01:02:51.449217999Z" level=info msg="TearDown network for sandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" successfully"
Aug 13 01:02:51.463402 env[1216]: time="2025-08-13T01:02:51.463350572Z" level=info msg="RemovePodSandbox \"44dc6788520dbced62d54c5c264dcc595d2b0f81e76d31d36ce5b498b4f8b7a8\" returns successfully"
Aug 13 01:02:51.463759 env[1216]: time="2025-08-13T01:02:51.463720783Z" level=info msg="StopPodSandbox for \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\""
Aug 13 01:02:51.463853 env[1216]: time="2025-08-13T01:02:51.463813801Z" level=info msg="TearDown network for sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" successfully"
Aug 13 01:02:51.463853 env[1216]: time="2025-08-13T01:02:51.463839981Z" level=info msg="StopPodSandbox for \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" returns successfully"
Aug 13 01:02:51.464203 env[1216]: time="2025-08-13T01:02:51.464157562Z" level=info msg="RemovePodSandbox for \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\""
Aug 13 01:02:51.464379 env[1216]: time="2025-08-13T01:02:51.464200715Z" level=info msg="Forcibly stopping sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\""
Aug 13 01:02:51.464379 env[1216]: time="2025-08-13T01:02:51.464286810Z" level=info msg="TearDown network for sandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" successfully"
Aug 13 01:02:51.471324 env[1216]: time="2025-08-13T01:02:51.471283332Z" level=info msg="RemovePodSandbox \"0f84b3a6119e27c8fced1e53e2f7b5f77a329f2bf49baf729e2c871089ee98b3\" returns successfully"