Sep 6 00:11:40.120624 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:11:40.120652 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:11:40.120662 kernel: BIOS-provided physical RAM map: Sep 6 00:11:40.120667 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 6 00:11:40.120673 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 6 00:11:40.120678 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 6 00:11:40.120685 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 6 00:11:40.120690 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 6 00:11:40.120706 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 6 00:11:40.120712 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 6 00:11:40.120729 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 6 00:11:40.120734 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 6 00:11:40.120740 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 6 00:11:40.120746 kernel: NX (Execute Disable) protection: active Sep 6 00:11:40.120755 kernel: SMBIOS 2.8 present. Sep 6 00:11:40.120761 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 6 00:11:40.120767 kernel: Hypervisor detected: KVM Sep 6 00:11:40.120773 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:11:40.120782 kernel: kvm-clock: cpu 0, msr 7919f001, primary cpu clock Sep 6 00:11:40.120788 kernel: kvm-clock: using sched offset of 3322113078 cycles Sep 6 00:11:40.120795 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:11:40.120801 kernel: tsc: Detected 2794.748 MHz processor Sep 6 00:11:40.120807 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:11:40.120816 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:11:40.120822 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 6 00:11:40.120829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:11:40.120835 kernel: Using GB pages for direct mapping Sep 6 00:11:40.120841 kernel: ACPI: Early table checksum verification disabled Sep 6 00:11:40.120847 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 6 00:11:40.120853 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120860 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120866 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120874 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 6 00:11:40.120880 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120886 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120892 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Sep 6 00:11:40.120898 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:11:40.120904 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 6 00:11:40.120910 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 6 00:11:40.120916 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 6 00:11:40.120927 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 6 00:11:40.120934 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 6 00:11:40.120941 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 6 00:11:40.120947 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 6 00:11:40.120954 kernel: No NUMA configuration found Sep 6 00:11:40.120960 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 6 00:11:40.120968 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 6 00:11:40.120975 kernel: Zone ranges: Sep 6 00:11:40.120981 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:11:40.120988 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 6 00:11:40.120994 kernel: Normal empty Sep 6 00:11:40.121001 kernel: Movable zone start for each node Sep 6 00:11:40.121007 kernel: Early memory node ranges Sep 6 00:11:40.121014 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 6 00:11:40.121020 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 6 00:11:40.121028 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 6 00:11:40.121037 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:11:40.121044 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 6 00:11:40.121050 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 6 00:11:40.121057 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 6 00:11:40.121063 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:11:40.121070 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 6 00:11:40.121076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 00:11:40.121083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:11:40.121089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:11:40.121100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:11:40.121107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:11:40.121114 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:11:40.121120 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:11:40.121126 kernel: TSC deadline timer available Sep 6 00:11:40.121133 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 6 00:11:40.121139 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 6 00:11:40.121146 kernel: kvm-guest: setup PV sched yield Sep 6 00:11:40.121153 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 6 00:11:40.121161 kernel: Booting paravirtualized kernel on KVM Sep 6 00:11:40.121168 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:11:40.121175 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 6 00:11:40.121181 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 6 00:11:40.121188 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 
alloc=1*2097152 Sep 6 00:11:40.121194 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 6 00:11:40.121201 kernel: kvm-guest: setup async PF for cpu 0 Sep 6 00:11:40.121207 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Sep 6 00:11:40.121214 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:11:40.121222 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:11:40.121228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Sep 6 00:11:40.121235 kernel: Policy zone: DMA32 Sep 6 00:11:40.121243 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:11:40.121250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:11:40.121256 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:11:40.121263 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:11:40.121270 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:11:40.121278 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved) Sep 6 00:11:40.121285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 6 00:11:40.121291 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:11:40.121298 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:11:40.121304 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:11:40.121312 kernel: rcu: RCU event tracing is enabled. Sep 6 00:11:40.121318 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 6 00:11:40.121325 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:11:40.121332 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:11:40.121340 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 6 00:11:40.121346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 6 00:11:40.121353 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 6 00:11:40.121359 kernel: random: crng init done Sep 6 00:11:40.121365 kernel: Console: colour VGA+ 80x25 Sep 6 00:11:40.121372 kernel: printk: console [ttyS0] enabled Sep 6 00:11:40.121379 kernel: ACPI: Core revision 20210730 Sep 6 00:11:40.121385 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 6 00:11:40.121392 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:11:40.121400 kernel: x2apic enabled Sep 6 00:11:40.121407 kernel: Switched APIC routing to physical x2apic. Sep 6 00:11:40.121415 kernel: kvm-guest: setup PV IPIs Sep 6 00:11:40.121422 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 6 00:11:40.121429 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 6 00:11:40.121438 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 6 00:11:40.121445 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 6 00:11:40.121452 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 6 00:11:40.121458 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 6 00:11:40.121471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:11:40.121477 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:11:40.121484 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:11:40.121493 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 6 00:11:40.121500 kernel: active return thunk: retbleed_return_thunk Sep 6 00:11:40.121507 kernel: RETBleed: Mitigation: untrained return thunk Sep 6 00:11:40.121514 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 00:11:40.121521 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 00:11:40.121529 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:11:40.121537 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:11:40.121544 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:11:40.121550 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:11:40.121557 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 00:11:40.121564 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:11:40.121571 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:11:40.121578 kernel: LSM: Security Framework initializing Sep 6 00:11:40.121586 kernel: SELinux: Initializing. Sep 6 00:11:40.121593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:11:40.121600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:11:40.121607 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 6 00:11:40.121614 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 6 00:11:40.121621 kernel: ... version: 0 Sep 6 00:11:40.121627 kernel: ... bit width: 48 Sep 6 00:11:40.121634 kernel: ... generic registers: 6 Sep 6 00:11:40.121641 kernel: ... value mask: 0000ffffffffffff Sep 6 00:11:40.121649 kernel: ... max period: 00007fffffffffff Sep 6 00:11:40.121656 kernel: ... fixed-purpose events: 0 Sep 6 00:11:40.121663 kernel: ... event mask: 000000000000003f Sep 6 00:11:40.121669 kernel: signal: max sigframe size: 1776 Sep 6 00:11:40.121676 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:11:40.121683 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:11:40.121690 kernel: x86: Booting SMP configuration: Sep 6 00:11:40.121703 kernel: .... 
node #0, CPUs: #1 Sep 6 00:11:40.121711 kernel: kvm-clock: cpu 1, msr 7919f041, secondary cpu clock Sep 6 00:11:40.121728 kernel: kvm-guest: setup async PF for cpu 1 Sep 6 00:11:40.121737 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Sep 6 00:11:40.121744 kernel: #2 Sep 6 00:11:40.121751 kernel: kvm-clock: cpu 2, msr 7919f081, secondary cpu clock Sep 6 00:11:40.121758 kernel: kvm-guest: setup async PF for cpu 2 Sep 6 00:11:40.121765 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Sep 6 00:11:40.121774 kernel: #3 Sep 6 00:11:40.121781 kernel: kvm-clock: cpu 3, msr 7919f0c1, secondary cpu clock Sep 6 00:11:40.121789 kernel: kvm-guest: setup async PF for cpu 3 Sep 6 00:11:40.121795 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Sep 6 00:11:40.121803 kernel: smp: Brought up 1 node, 4 CPUs Sep 6 00:11:40.121811 kernel: smpboot: Max logical packages: 1 Sep 6 00:11:40.121818 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 6 00:11:40.121824 kernel: devtmpfs: initialized Sep 6 00:11:40.121831 kernel: x86/mm: Memory block size: 128MB Sep 6 00:11:40.121838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:11:40.121845 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 6 00:11:40.121852 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:11:40.121859 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:11:40.121867 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:11:40.121874 kernel: audit: type=2000 audit(1757117499.705:1): state=initialized audit_enabled=0 res=1 Sep 6 00:11:40.121881 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:11:40.121887 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:11:40.121894 kernel: cpuidle: using governor menu Sep 6 00:11:40.121901 kernel: ACPI: bus type PCI registered Sep 6 00:11:40.121908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:11:40.121919 kernel: dca service started, version 1.12.1 Sep 6 00:11:40.121928 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 6 00:11:40.121963 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 6 00:11:40.121977 kernel: PCI: Using configuration type 1 for base access Sep 6 00:11:40.121988 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 6 00:11:40.121995 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:11:40.122002 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:11:40.122009 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:11:40.122016 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:11:40.122023 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:11:40.122030 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:11:40.122039 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:11:40.122046 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:11:40.122053 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:11:40.122060 kernel: ACPI: Interpreter enabled Sep 6 00:11:40.122067 kernel: ACPI: PM: (supports S0 S3 S5) Sep 6 00:11:40.122073 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:11:40.122081 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:11:40.122088 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 6 00:11:40.122094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:11:40.122447 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:11:40.122528 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 6 00:11:40.122605 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 6 00:11:40.122615 kernel: PCI host bridge to bus 0000:00 Sep 6 00:11:40.122711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:11:40.122799 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:11:40.122873 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:11:40.128380 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 6 00:11:40.128460 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 6 00:11:40.128528 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 6 00:11:40.128594 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:11:40.128689 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 6 00:11:40.128799 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 6 00:11:40.128885 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 6 00:11:40.128987 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 6 00:11:40.129092 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 6 00:11:40.129167 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:11:40.129262 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:11:40.129342 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 6 00:11:40.129423 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 6 00:11:40.129503 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 6 00:11:40.129592 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 6 00:11:40.129669 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 6 00:11:40.129813 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 6 00:11:40.129889 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 6 00:11:40.129970 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 6 00:11:40.130046 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc0e0-0xc0ff] Sep 6 00:11:40.130126 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 6 00:11:40.130199 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 6 00:11:40.130275 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 6 00:11:40.130361 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 6 00:11:40.130437 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 6 00:11:40.130521 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 6 00:11:40.130600 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 6 00:11:40.130677 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 6 00:11:40.130782 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 6 00:11:40.130861 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 6 00:11:40.130870 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:11:40.130877 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:11:40.130884 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:11:40.130891 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:11:40.130902 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 6 00:11:40.130909 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 6 00:11:40.130916 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 6 00:11:40.130923 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 6 00:11:40.130930 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 6 00:11:40.130937 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 6 00:11:40.130944 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 6 00:11:40.130951 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 6 00:11:40.130958 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 6 00:11:40.130967 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 6 00:11:40.130974 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 6 00:11:40.130981 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 6 00:11:40.130988 kernel: iommu: Default domain type: Translated Sep 6 00:11:40.130995 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:11:40.131090 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 6 00:11:40.131186 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:11:40.131294 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 6 00:11:40.131308 kernel: vgaarb: loaded Sep 6 00:11:40.131316 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:11:40.131323 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:11:40.131330 kernel: PTP clock support registered Sep 6 00:11:40.131348 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:11:40.131357 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:11:40.131364 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 6 00:11:40.131372 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 6 00:11:40.131378 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 6 00:11:40.131388 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 6 00:11:40.131395 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:11:40.131414 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:11:40.131422 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:11:40.131429 kernel: pnp: PnP ACPI init Sep 6 00:11:40.131544 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 6 00:11:40.131557 kernel: pnp: PnP ACPI: found 6 devices Sep 6 00:11:40.131564 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:11:40.131574 kernel: NET: Registered PF_INET protocol family Sep 6 00:11:40.131582 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:11:40.131589 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:11:40.131596 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:11:40.131603 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:11:40.131610 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:11:40.131617 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:11:40.131637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:11:40.131646 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:11:40.131655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:11:40.131663 kernel: NET: Registered PF_XDP protocol family Sep 6 00:11:40.131760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:11:40.131831 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:11:40.131908 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:11:40.131991 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 6 00:11:40.132095 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 6 00:11:40.132179 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 6 00:11:40.132190 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:11:40.132201 kernel: Initialise system trusted keyrings Sep 6 00:11:40.132208 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:11:40.132215 kernel: Key type asymmetric registered Sep 6 00:11:40.132223 kernel: Asymmetric key parser 'x509' registered Sep 6 00:11:40.132230 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:11:40.132249 kernel: io scheduler mq-deadline registered Sep 6 00:11:40.132259 kernel: io scheduler kyber registered Sep 6 00:11:40.132267 kernel: io scheduler bfq registered Sep 6 00:11:40.132276 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:11:40.132288 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 6 00:11:40.132296 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 6 00:11:40.132304 kernel: ACPI: \_SB_.GSIE: Enabled 
at IRQ 20 Sep 6 00:11:40.132325 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:11:40.132335 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:11:40.132343 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:11:40.132351 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:11:40.132360 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:11:40.132482 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 6 00:11:40.132510 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:11:40.132613 kernel: rtc_cmos 00:04: registered as rtc0 Sep 6 00:11:40.132749 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T00:11:39 UTC (1757117499) Sep 6 00:11:40.132841 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 6 00:11:40.132863 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:11:40.132871 kernel: Segment Routing with IPv6 Sep 6 00:11:40.132878 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:11:40.132885 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:11:40.132900 kernel: Key type dns_resolver registered Sep 6 00:11:40.132914 kernel: IPI shorthand broadcast: enabled Sep 6 00:11:40.132926 kernel: sched_clock: Marking stable (408006817, 101214434)->(564824879, -55603628) Sep 6 00:11:40.132933 kernel: registered taskstats version 1 Sep 6 00:11:40.132940 kernel: Loading compiled-in X.509 certificates Sep 6 00:11:40.132947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:11:40.132954 kernel: Key type .fscrypt registered Sep 6 00:11:40.132972 kernel: Key type fscrypt-provisioning registered Sep 6 00:11:40.132981 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:11:40.132991 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:11:40.132998 kernel: ima: No architecture policies found Sep 6 00:11:40.133005 kernel: clk: Disabling unused clocks Sep 6 00:11:40.133012 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:11:40.133019 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:11:40.133026 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:11:40.133033 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:11:40.133040 kernel: Run /init as init process Sep 6 00:11:40.133061 kernel: with arguments: Sep 6 00:11:40.133068 kernel: /init Sep 6 00:11:40.133075 kernel: with environment: Sep 6 00:11:40.133082 kernel: HOME=/ Sep 6 00:11:40.133089 kernel: TERM=linux Sep 6 00:11:40.133096 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:11:40.133120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:11:40.133131 systemd[1]: Detected virtualization kvm. Sep 6 00:11:40.133142 systemd[1]: Detected architecture x86-64. Sep 6 00:11:40.133149 systemd[1]: Running in initrd. Sep 6 00:11:40.133168 systemd[1]: No hostname configured, using default hostname. Sep 6 00:11:40.133176 systemd[1]: Hostname set to . Sep 6 00:11:40.133184 systemd[1]: Initializing machine ID from VM UUID. 
Sep 6 00:11:40.133192 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:11:40.133199 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:11:40.133218 systemd[1]: Reached target cryptsetup.target. Sep 6 00:11:40.133226 systemd[1]: Reached target paths.target. Sep 6 00:11:40.133237 systemd[1]: Reached target slices.target. Sep 6 00:11:40.133252 systemd[1]: Reached target swap.target. Sep 6 00:11:40.133272 systemd[1]: Reached target timers.target. Sep 6 00:11:40.133281 systemd[1]: Listening on iscsid.socket. Sep 6 00:11:40.133289 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:11:40.133299 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:11:40.133307 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:11:40.133315 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:11:40.133323 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:11:40.133330 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:11:40.133339 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:11:40.133346 systemd[1]: Reached target sockets.target. Sep 6 00:11:40.133354 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:11:40.133362 systemd[1]: Finished network-cleanup.service. Sep 6 00:11:40.133371 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:11:40.133379 systemd[1]: Starting systemd-journald.service... Sep 6 00:11:40.133386 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:11:40.133394 systemd[1]: Starting systemd-resolved.service... Sep 6 00:11:40.133402 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:11:40.133414 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:11:40.133422 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:11:40.133430 kernel: audit: type=1130 audit(1757117500.118:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.133439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:11:40.133448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:11:40.133463 systemd-journald[198]: Journal started Sep 6 00:11:40.133507 systemd-journald[198]: Runtime Journal (/run/log/journal/ee830237f96043bf96b05e23fe4f8fc3) is 6.0M, max 48.5M, 42.5M free. Sep 6 00:11:40.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.123381 systemd-modules-load[199]: Inserted module 'overlay' Sep 6 00:11:40.180583 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:11:40.180609 systemd[1]: Started systemd-journald.service. Sep 6 00:11:40.180621 kernel: audit: type=1130 audit(1757117500.160:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.180632 kernel: audit: type=1130 audit(1757117500.164:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:40.180641 kernel: audit: type=1130 audit(1757117500.170:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.180650 kernel: audit: type=1130 audit(1757117500.170:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.141795 systemd-resolved[200]: Positive Trust Anchors: Sep 6 00:11:40.141804 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:11:40.141831 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:11:40.189165 kernel: Bridge firewalling registered Sep 6 00:11:40.144379 systemd-resolved[200]: Defaulting to hostname 'linux'. Sep 6 00:11:40.164959 systemd[1]: Started systemd-resolved.service. Sep 6 00:11:40.171575 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:11:40.174657 systemd[1]: Reached target nss-lookup.target. Sep 6 00:11:40.178933 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:11:40.198509 kernel: audit: type=1130 audit(1757117500.193:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.189070 systemd-modules-load[199]: Inserted module 'br_netfilter' Sep 6 00:11:40.192300 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:11:40.194709 systemd[1]: Starting dracut-cmdline.service... 
Sep 6 00:11:40.205126 dracut-cmdline[215]: dracut-dracut-053 Sep 6 00:11:40.207525 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:11:40.212571 kernel: SCSI subsystem initialized Sep 6 00:11:40.224351 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:11:40.224375 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:11:40.225574 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:11:40.228240 systemd-modules-load[199]: Inserted module 'dm_multipath' Sep 6 00:11:40.229004 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:11:40.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.234657 kernel: audit: type=1130 audit(1757117500.229:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.233663 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:11:40.241383 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:11:40.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.245756 kernel: audit: type=1130 audit(1757117500.242:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.270755 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:11:40.292744 kernel: iscsi: registered transport (tcp) Sep 6 00:11:40.313744 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:11:40.313783 kernel: QLogic iSCSI HBA Driver Sep 6 00:11:40.339591 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:11:40.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.340582 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:11:40.345219 kernel: audit: type=1130 audit(1757117500.338:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:40.386755 kernel: raid6: avx2x4 gen() 30626 MB/s Sep 6 00:11:40.403751 kernel: raid6: avx2x4 xor() 7905 MB/s Sep 6 00:11:40.420750 kernel: raid6: avx2x2 gen() 32211 MB/s Sep 6 00:11:40.437759 kernel: raid6: avx2x2 xor() 19226 MB/s Sep 6 00:11:40.454751 kernel: raid6: avx2x1 gen() 26187 MB/s Sep 6 00:11:40.471764 kernel: raid6: avx2x1 xor() 15342 MB/s Sep 6 00:11:40.488757 kernel: raid6: sse2x4 gen() 14777 MB/s Sep 6 00:11:40.505748 kernel: raid6: sse2x4 xor() 7639 MB/s Sep 6 00:11:40.522748 kernel: raid6: sse2x2 gen() 16167 MB/s Sep 6 00:11:40.539747 kernel: raid6: sse2x2 xor() 9837 MB/s Sep 6 00:11:40.556745 kernel: raid6: sse2x1 gen() 12064 MB/s Sep 6 00:11:40.574122 kernel: raid6: sse2x1 xor() 7787 MB/s Sep 6 00:11:40.574149 kernel: raid6: using algorithm avx2x2 gen() 32211 MB/s Sep 6 00:11:40.574162 kernel: raid6: .... xor() 19226 MB/s, rmw enabled Sep 6 00:11:40.574808 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:11:40.586740 kernel: xor: automatically using best checksumming function avx Sep 6 00:11:40.676755 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:11:40.685068 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:11:40.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.685000 audit: BPF prog-id=7 op=LOAD Sep 6 00:11:40.685000 audit: BPF prog-id=8 op=LOAD Sep 6 00:11:40.687103 systemd[1]: Starting systemd-udevd.service... Sep 6 00:11:40.699486 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 6 00:11:40.704210 systemd[1]: Started systemd-udevd.service. Sep 6 00:11:40.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.705257 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:11:40.715472 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Sep 6 00:11:40.738746 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:11:40.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.740234 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:11:40.773686 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:11:40.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:40.807869 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:11:40.814157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:11:40.814174 kernel: GPT:9289727 != 19775487 Sep 6 00:11:40.814183 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:11:40.814191 kernel: GPT:9289727 != 19775487 Sep 6 00:11:40.814200 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:11:40.814209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:11:40.816737 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:11:40.828738 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 6 00:11:40.828778 kernel: AES CTR mode by8 optimization enabled Sep 6 00:11:40.828787 kernel: libata version 3.00 loaded. Sep 6 00:11:40.837739 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 00:11:40.853192 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 00:11:40.853209 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 00:11:40.853307 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 00:11:40.853389 kernel: scsi host0: ahci Sep 6 00:11:40.853492 kernel: scsi host1: ahci Sep 6 00:11:40.853583 kernel: scsi host2: ahci Sep 6 00:11:40.853685 kernel: scsi host3: ahci Sep 6 00:11:40.853798 kernel: scsi host4: ahci Sep 6 00:11:40.853903 kernel: scsi host5: ahci Sep 6 00:11:40.853992 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 6 00:11:40.854005 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 6 00:11:40.854013 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 6 00:11:40.854022 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 6 00:11:40.854031 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 6 00:11:40.854039 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 6 00:11:40.865744 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (436) Sep 6 00:11:40.867948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:11:40.895622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:11:40.904112 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:11:40.907908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:11:40.911653 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:11:40.913655 systemd[1]: Starting disk-uuid.service... Sep 6 00:11:40.923815 disk-uuid[521]: Primary Header is updated. Sep 6 00:11:40.923815 disk-uuid[521]: Secondary Entries is updated. Sep 6 00:11:40.923815 disk-uuid[521]: Secondary Header is updated. Sep 6 00:11:40.928738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:11:40.932736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:11:41.159910 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 00:11:41.160001 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 00:11:41.167740 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 00:11:41.167764 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 00:11:41.168751 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 6 00:11:41.169743 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 00:11:41.170752 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 6 00:11:41.171770 kernel: ata3.00: applying bridge limits Sep 6 00:11:41.171790 kernel: ata3.00: configured for UDMA/100 Sep 6 00:11:41.174741 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 6 00:11:41.208056 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 6 00:11:41.225782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 00:11:41.225810 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 6 00:11:41.932760 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:11:41.932930 disk-uuid[522]: The operation has completed successfully. Sep 6 00:11:41.969152 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 6 00:11:41.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:41.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:41.969253 systemd[1]: Finished disk-uuid.service. Sep 6 00:11:41.970184 systemd[1]: Starting verity-setup.service... Sep 6 00:11:41.982754 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 6 00:11:42.003040 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:11:42.005310 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:11:42.008166 systemd[1]: Finished verity-setup.service. Sep 6 00:11:42.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.068611 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:11:42.070011 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:11:42.070064 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:11:42.071988 systemd[1]: Starting ignition-setup.service... Sep 6 00:11:42.073854 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:11:42.081858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:11:42.081887 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:11:42.081902 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:11:42.091111 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:11:42.100123 systemd[1]: Finished ignition-setup.service. Sep 6 00:11:42.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.101685 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:11:42.157061 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:11:42.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.158000 audit: BPF prog-id=9 op=LOAD Sep 6 00:11:42.159332 systemd[1]: Starting systemd-networkd.service... Sep 6 00:11:42.193171 systemd-networkd[712]: lo: Link UP Sep 6 00:11:42.193181 systemd-networkd[712]: lo: Gained carrier Sep 6 00:11:42.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.196932 ignition[639]: Ignition 2.14.0 Sep 6 00:11:42.193781 systemd-networkd[712]: Enumeration completed Sep 6 00:11:42.196940 ignition[639]: Stage: fetch-offline Sep 6 00:11:42.193871 systemd[1]: Started systemd-networkd.service. Sep 6 00:11:42.197003 ignition[639]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:42.194482 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:11:42.197012 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:42.195140 systemd[1]: Reached target network.target. 
Sep 6 00:11:42.197140 ignition[639]: parsed url from cmdline: "" Sep 6 00:11:42.197915 systemd-networkd[712]: eth0: Link UP Sep 6 00:11:42.197145 ignition[639]: no config URL provided Sep 6 00:11:42.197919 systemd-networkd[712]: eth0: Gained carrier Sep 6 00:11:42.197159 ignition[639]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:11:42.198123 systemd[1]: Starting iscsiuio.service... Sep 6 00:11:42.197167 ignition[639]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:11:42.197194 ignition[639]: op(1): [started] loading QEMU firmware config module Sep 6 00:11:42.197201 ignition[639]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:11:42.204906 ignition[639]: op(1): [finished] loading QEMU firmware config module Sep 6 00:11:42.226084 systemd[1]: Started iscsiuio.service. Sep 6 00:11:42.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.228582 systemd[1]: Starting iscsid.service... Sep 6 00:11:42.232103 iscsid[718]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:11:42.232103 iscsid[718]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 00:11:42.232103 iscsid[718]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:11:42.232103 iscsid[718]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:11:42.241401 iscsid[718]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:11:42.241401 iscsid[718]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:11:42.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.238693 systemd[1]: Started iscsid.service. Sep 6 00:11:42.245231 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:11:42.257730 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:11:42.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.259611 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:11:42.261329 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:11:42.263097 systemd[1]: Reached target remote-fs.target. Sep 6 00:11:42.265225 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:11:42.274444 ignition[639]: parsing config with SHA512: b4160dd49be7fc935ba66d69b852a93e35a553622c2c7d5a4fdcac22c9259d582afc6134acca4019fc6d763ca79bfbc5037970385d02153f8c91d5b2dcd58212 Sep 6 00:11:42.276225 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:11:42.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:42.283799 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:11:42.286725 unknown[639]: fetched base config from "system" Sep 6 00:11:42.286734 unknown[639]: fetched user config from "qemu" Sep 6 00:11:42.287272 ignition[639]: fetch-offline: fetch-offline passed Sep 6 00:11:42.287333 ignition[639]: Ignition finished successfully Sep 6 00:11:42.290425 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:11:42.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.291390 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:11:42.292117 systemd[1]: Starting ignition-kargs.service... Sep 6 00:11:42.309069 ignition[733]: Ignition 2.14.0 Sep 6 00:11:42.309079 ignition[733]: Stage: kargs Sep 6 00:11:42.309182 ignition[733]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:42.309193 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:42.311807 systemd[1]: Finished ignition-kargs.service. Sep 6 00:11:42.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.310447 ignition[733]: kargs: kargs passed Sep 6 00:11:42.310487 ignition[733]: Ignition finished successfully Sep 6 00:11:42.314270 systemd[1]: Starting ignition-disks.service... Sep 6 00:11:42.324550 ignition[739]: Ignition 2.14.0 Sep 6 00:11:42.324558 ignition[739]: Stage: disks Sep 6 00:11:42.324673 ignition[739]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:42.324682 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:42.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.326773 systemd[1]: Finished ignition-disks.service. Sep 6 00:11:42.326052 ignition[739]: disks: disks passed Sep 6 00:11:42.328453 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:11:42.326091 ignition[739]: Ignition finished successfully Sep 6 00:11:42.330745 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:11:42.331745 systemd[1]: Reached target local-fs.target. Sep 6 00:11:42.332611 systemd[1]: Reached target sysinit.target. Sep 6 00:11:42.334324 systemd[1]: Reached target basic.target. Sep 6 00:11:42.335907 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:11:42.347258 systemd-fsck[747]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:11:42.352324 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:11:42.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.355433 systemd[1]: Mounting sysroot.mount... Sep 6 00:11:42.362499 systemd[1]: Mounted sysroot.mount. Sep 6 00:11:42.363911 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:11:42.362594 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:11:42.365869 systemd[1]: Mounting sysroot-usr.mount... 
Sep 6 00:11:42.366201 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:11:42.366240 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:11:42.366262 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:11:42.375133 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:11:42.376761 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:11:42.382375 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:11:42.386401 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:11:42.389952 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:11:42.392573 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:11:42.415032 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:11:42.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.416837 systemd[1]: Starting ignition-mount.service... Sep 6 00:11:42.418239 systemd[1]: Starting sysroot-boot.service... Sep 6 00:11:42.425281 bash[799]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:11:42.452330 systemd[1]: Finished sysroot-boot.service. Sep 6 00:11:42.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.455320 ignition[800]: INFO : Ignition 2.14.0 Sep 6 00:11:42.455320 ignition[800]: INFO : Stage: mount Sep 6 00:11:42.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:42.458206 ignition[800]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:42.458206 ignition[800]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:42.458206 ignition[800]: INFO : mount: mount passed Sep 6 00:11:42.458206 ignition[800]: INFO : Ignition finished successfully Sep 6 00:11:42.457022 systemd[1]: Finished ignition-mount.service. Sep 6 00:11:43.017260 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:11:43.026593 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809) Sep 6 00:11:43.026652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:11:43.026668 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:11:43.027383 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:11:43.032865 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:11:43.034437 systemd[1]: Starting ignition-files.service... 
Sep 6 00:11:43.061096 ignition[829]: INFO : Ignition 2.14.0 Sep 6 00:11:43.061096 ignition[829]: INFO : Stage: files Sep 6 00:11:43.063319 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:43.063319 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:43.063319 ignition[829]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:11:43.067230 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:11:43.067230 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:11:43.067230 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:11:43.067230 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:11:43.067230 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:11:43.067011 unknown[829]: wrote ssh authorized keys file for user: core Sep 6 00:11:43.076061 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:11:43.076061 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 00:11:43.123694 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 6 00:11:43.387790 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:11:43.390153 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:11:43.390153 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 00:11:43.782274 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:11:44.034059 systemd-networkd[712]: eth0: Gained IPv6LL Sep 6 00:11:44.202262 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:11:44.202262 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:11:44.206956 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:11:44.714461 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:11:46.289847 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:11:46.289847 ignition[829]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:11:46.294262 ignition[829]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:11:46.338782 ignition[829]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:11:46.340475 ignition[829]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:11:46.340475 ignition[829]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:11:46.340475 ignition[829]: INFO : files: createResultFile: createFiles: op(13): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:11:46.340475 ignition[829]: INFO : files: files passed Sep 6 00:11:46.340475 ignition[829]: INFO : Ignition finished successfully Sep 6 00:11:46.347443 systemd[1]: Finished ignition-files.service. Sep 6 00:11:46.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.349376 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:11:46.355550 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 6 00:11:46.355576 kernel: audit: type=1130 audit(1757117506.348:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.352967 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:11:46.353592 systemd[1]: Starting ignition-quench.service... Sep 6 00:11:46.366089 kernel: audit: type=1130 audit(1757117506.357:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.366107 kernel: audit: type=1131 audit(1757117506.358:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.366121 kernel: audit: type=1130 audit(1757117506.366:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.366206 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:11:46.356541 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:11:46.372929 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:11:46.356613 systemd[1]: Finished ignition-quench.service. Sep 6 00:11:46.358981 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:11:46.366200 systemd[1]: Reached target ignition-complete.target. Sep 6 00:11:46.369697 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:11:46.381576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:11:46.381653 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:11:46.390184 kernel: audit: type=1130 audit(1757117506.383:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:46.390200 kernel: audit: type=1131 audit(1757117506.383:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.383393 systemd[1]: Reached target initrd-fs.target. Sep 6 00:11:46.390186 systemd[1]: Reached target initrd.target. Sep 6 00:11:46.390961 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:11:46.391679 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:11:46.400972 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:11:46.405962 kernel: audit: type=1130 audit(1757117506.401:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.402427 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:11:46.410224 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:11:46.411144 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:11:46.412606 systemd[1]: Stopped target timers.target. Sep 6 00:11:46.414114 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:11:46.419813 kernel: audit: type=1131 audit(1757117506.414:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.414201 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:11:46.415650 systemd[1]: Stopped target initrd.target. Sep 6 00:11:46.419877 systemd[1]: Stopped target basic.target. Sep 6 00:11:46.421356 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:11:46.422887 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:11:46.424449 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:11:46.426137 systemd[1]: Stopped target remote-fs.target. Sep 6 00:11:46.427711 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:11:46.429345 systemd[1]: Stopped target sysinit.target. Sep 6 00:11:46.430862 systemd[1]: Stopped target local-fs.target. Sep 6 00:11:46.432554 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:11:46.434256 systemd[1]: Stopped target swap.target. Sep 6 00:11:46.442234 kernel: audit: type=1131 audit(1757117506.437:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:46.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.435879 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:11:46.435982 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:11:46.448921 kernel: audit: type=1131 audit(1757117506.444:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.437826 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:11:46.442266 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:11:46.442385 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:11:46.444441 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:11:46.444555 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:11:46.449031 systemd[1]: Stopped target paths.target. Sep 6 00:11:46.450676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:11:46.455825 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:11:46.457575 systemd[1]: Stopped target slices.target. Sep 6 00:11:46.459121 systemd[1]: Stopped target sockets.target. Sep 6 00:11:46.460610 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:11:46.460728 systemd[1]: Closed iscsid.socket. Sep 6 00:11:46.462294 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:11:46.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.462372 systemd[1]: Closed iscsiuio.socket. Sep 6 00:11:46.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.463764 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:11:46.463864 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:11:46.465401 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:11:46.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.465485 systemd[1]: Stopped ignition-files.service. Sep 6 00:11:46.468231 systemd[1]: Stopping ignition-mount.service... Sep 6 00:11:46.469569 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:11:46.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:46.477521 ignition[869]: INFO : Ignition 2.14.0 Sep 6 00:11:46.477521 ignition[869]: INFO : Stage: umount Sep 6 00:11:46.469669 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:11:46.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.480227 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:11:46.480227 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:11:46.480227 ignition[869]: INFO : umount: umount passed Sep 6 00:11:46.480227 ignition[869]: INFO : Ignition finished successfully Sep 6 00:11:46.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.472581 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:11:46.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.474041 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:11:46.474188 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:11:46.475861 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:11:46.475964 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:11:46.480264 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:11:46.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.480349 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:11:46.482299 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:11:46.482369 systemd[1]: Stopped ignition-mount.service. Sep 6 00:11:46.484251 systemd[1]: Stopped target network.target. Sep 6 00:11:46.485757 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:11:46.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:46.485806 systemd[1]: Stopped ignition-disks.service. Sep 6 00:11:46.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.487389 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:11:46.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.487420 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:11:46.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.488457 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:11:46.488496 systemd[1]: Stopped ignition-setup.service. Sep 6 00:11:46.490113 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:11:46.492087 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:11:46.516000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:11:46.494429 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:11:46.494764 systemd-networkd[712]: eth0: DHCPv6 lease lost Sep 6 00:11:46.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.517000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:11:46.494885 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:11:46.494966 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:11:46.496108 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:11:46.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.496186 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:11:46.499846 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:11:46.499871 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:11:46.501544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:11:46.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.501579 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:11:46.503870 systemd[1]: Stopping network-cleanup.service... Sep 6 00:11:46.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.504892 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 6 00:11:46.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.504937 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:11:46.505866 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:11:46.505897 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:11:46.507307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:11:46.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:46.507347 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:11:46.507522 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:11:46.508737 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:11:46.509233 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:11:46.509332 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:11:46.516405 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:11:46.516488 systemd[1]: Stopped network-cleanup.service. Sep 6 00:11:46.519949 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:11:46.520064 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:11:46.522808 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:11:46.522842 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:11:46.524371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:11:46.524397 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:11:46.526226 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:11:46.526269 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:11:46.527777 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:11:46.527808 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:11:46.529446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:11:46.529477 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:11:46.531681 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:11:46.532631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:11:46.532670 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:11:46.536955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:11:46.537026 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:11:46.538232 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:11:46.540606 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:11:46.557807 systemd[1]: Switching root. Sep 6 00:11:46.573900 iscsid[718]: iscsid shutting down. Sep 6 00:11:46.574645 systemd-journald[198]: Journal stopped Sep 6 00:11:51.106409 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Sep 6 00:11:51.106482 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:11:51.106502 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 00:11:51.106517 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:11:51.106527 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:11:51.106536 kernel: SELinux: policy capability open_perms=1 Sep 6 00:11:51.106546 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:11:51.106555 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:11:51.106565 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:11:51.106574 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:11:51.106587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:11:51.106597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:11:51.106608 systemd[1]: Successfully loaded SELinux policy in 39.415ms. Sep 6 00:11:51.106622 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.802ms. Sep 6 00:11:51.106633 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:11:51.106644 systemd[1]: Detected virtualization kvm. Sep 6 00:11:51.106656 systemd[1]: Detected architecture x86-64. Sep 6 00:11:51.106669 systemd[1]: Detected first boot. Sep 6 00:11:51.106688 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:11:51.106700 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:11:51.106710 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:11:51.106744 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:11:51.106756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:11:51.106767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:11:51.106782 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:11:51.106792 systemd[1]: Stopped iscsiuio.service. Sep 6 00:11:51.106803 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:11:51.106815 systemd[1]: Stopped iscsid.service. Sep 6 00:11:51.106825 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:11:51.106836 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:11:51.106846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:11:51.106856 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:11:51.106867 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:11:51.106879 systemd[1]: Created slice system-getty.slice. Sep 6 00:11:51.106889 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:11:51.106899 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:11:51.106910 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:11:51.106920 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:11:51.106930 systemd[1]: Created slice user.slice. Sep 6 00:11:51.106946 systemd[1]: Started systemd-ask-password-console.path. 
Sep 6 00:11:51.106957 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:11:51.106969 systemd[1]: Set up automount boot.automount. Sep 6 00:11:51.106980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:11:51.106990 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:11:51.107000 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:11:51.107010 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:11:51.107020 systemd[1]: Reached target integritysetup.target. Sep 6 00:11:51.107031 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:11:51.107044 systemd[1]: Reached target remote-fs.target. Sep 6 00:11:51.107055 systemd[1]: Reached target slices.target. Sep 6 00:11:51.107066 systemd[1]: Reached target swap.target. Sep 6 00:11:51.107077 systemd[1]: Reached target torcx.target. Sep 6 00:11:51.107090 systemd[1]: Reached target veritysetup.target. Sep 6 00:11:51.107100 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:11:51.107110 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:11:51.107121 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:11:51.107131 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:11:51.107142 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:11:51.107152 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:11:51.107165 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:11:51.107179 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:11:51.107189 systemd[1]: Mounting media.mount... Sep 6 00:11:51.107200 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:51.107210 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:11:51.107220 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:11:51.107231 systemd[1]: Mounting tmp.mount... Sep 6 00:11:51.107241 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:11:51.107252 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:11:51.107263 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:11:51.107274 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:11:51.107287 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:11:51.107297 systemd[1]: Starting modprobe@drm.service... Sep 6 00:11:51.107307 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:11:51.107317 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:11:51.107328 systemd[1]: Starting modprobe@loop.service... Sep 6 00:11:51.107339 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:11:51.107349 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:11:51.107361 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:11:51.107371 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:11:51.107391 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:11:51.107401 systemd[1]: Stopped systemd-journald.service. Sep 6 00:11:51.107415 kernel: fuse: init (API version 7.34) Sep 6 00:11:51.107425 kernel: loop: module loaded Sep 6 00:11:51.107435 systemd[1]: Starting systemd-journald.service... Sep 6 00:11:51.107445 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:11:51.107456 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:11:51.107469 systemd[1]: Starting systemd-remount-fs.service... 
Sep 6 00:11:51.107479 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:11:51.107490 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:11:51.107500 systemd[1]: Stopped verity-setup.service. Sep 6 00:11:51.107511 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:51.107524 systemd-journald[979]: Journal started Sep 6 00:11:51.107564 systemd-journald[979]: Runtime Journal (/run/log/journal/ee830237f96043bf96b05e23fe4f8fc3) is 6.0M, max 48.5M, 42.5M free. Sep 6 00:11:46.637000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:11:47.113000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:11:47.113000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:11:47.113000 audit: BPF prog-id=10 op=LOAD Sep 6 00:11:47.113000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:11:47.113000 audit: BPF prog-id=11 op=LOAD Sep 6 00:11:47.113000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:11:47.153000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:11:47.153000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00015589c a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:11:47.153000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:11:47.154000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:11:47.154000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155975 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:11:47.154000 audit: CWD cwd="/" Sep 6 00:11:47.154000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:47.154000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:47.154000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:11:50.940000 audit: BPF prog-id=12 op=LOAD Sep 6 00:11:50.940000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:11:50.940000 audit: BPF prog-id=13 op=LOAD Sep 6 00:11:50.940000 audit: BPF prog-id=14 op=LOAD Sep 6 00:11:50.940000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:11:50.940000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:11:50.941000 audit: BPF prog-id=15 op=LOAD Sep 6 00:11:50.941000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:11:50.941000 audit: BPF prog-id=16 op=LOAD Sep 6 00:11:50.941000 audit: BPF prog-id=17 op=LOAD Sep 6 00:11:50.941000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:11:50.941000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:11:50.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:50.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:50.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:50.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:50.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:50.956000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:11:51.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.084000 audit: BPF prog-id=18 op=LOAD Sep 6 00:11:51.084000 audit: BPF prog-id=19 op=LOAD Sep 6 00:11:51.084000 audit: BPF prog-id=20 op=LOAD Sep 6 00:11:51.084000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:11:51.085000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:11:51.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 00:11:51.103000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:11:51.103000 audit[979]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff859b5d00 a2=4000 a3=7fff859b5d9c items=0 ppid=1 pid=979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:11:51.103000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:11:50.938960 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:11:47.152515 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:11:50.938972 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:11:47.152748 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:11:50.943493 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 00:11:47.152765 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:11:47.152793 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:11:47.152802 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:11:47.152829 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:11:47.152840 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:11:47.153027 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:11:47.153063 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:11:47.153075 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:11:47.153640 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:11:47.153671 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 
00:11:47.153687 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:11:47.153700 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:11:47.153731 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:11:47.153744 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:11:50.623868 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:11:50.624184 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:11:50.624292 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:11:50.624495 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:11:50.624542 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:11:50.624758 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2025-09-06T00:11:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:11:51.110771 systemd[1]: Started systemd-journald.service. Sep 6 00:11:51.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.111597 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:11:51.112679 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:11:51.113728 systemd[1]: Mounted media.mount. Sep 6 00:11:51.114665 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:11:51.115754 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:11:51.116957 systemd[1]: Mounted tmp.mount. 
Sep 6 00:11:51.118217 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:11:51.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.119603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:11:51.119805 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:11:51.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.121450 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:11:51.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.122791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:11:51.122981 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:11:51.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.124174 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:11:51.124443 systemd[1]: Finished modprobe@drm.service. Sep 6 00:11:51.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.125592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:11:51.125704 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:11:51.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.127197 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:11:51.127367 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:11:51.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:51.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.128703 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:11:51.128860 systemd[1]: Finished modprobe@loop.service. Sep 6 00:11:51.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.130091 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:11:51.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.131311 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:11:51.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.132672 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:11:51.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.134008 systemd[1]: Reached target network-pre.target. Sep 6 00:11:51.136186 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:11:51.138241 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:11:51.139460 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:11:51.140881 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:11:51.142865 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:11:51.144032 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:11:51.145117 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:11:51.146230 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:11:51.147221 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:11:51.149280 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:11:51.153686 systemd-journald[979]: Time spent on flushing to /var/log/journal/ee830237f96043bf96b05e23fe4f8fc3 is 18.740ms for 1099 entries. Sep 6 00:11:51.153686 systemd-journald[979]: System Journal (/var/log/journal/ee830237f96043bf96b05e23fe4f8fc3) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:11:51.192863 systemd-journald[979]: Received client request to flush runtime journal. Sep 6 00:11:51.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:51.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.152711 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:11:51.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.154952 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:11:51.195242 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:11:51.160902 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:11:51.162225 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:11:51.164199 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:11:51.166763 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:11:51.171866 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:11:51.176017 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:11:51.193890 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:11:51.721515 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:11:51.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.726756 kernel: kauditd_printk_skb: 99 callbacks suppressed Sep 6 00:11:51.726862 kernel: audit: type=1130 audit(1757117511.722:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.726918 kernel: audit: type=1334 audit(1757117511.726:136): prog-id=21 op=LOAD Sep 6 00:11:51.726000 audit: BPF prog-id=21 op=LOAD Sep 6 00:11:51.727000 audit: BPF prog-id=22 op=LOAD Sep 6 00:11:51.728832 kernel: audit: type=1334 audit(1757117511.727:137): prog-id=22 op=LOAD Sep 6 00:11:51.728890 kernel: audit: type=1334 audit(1757117511.727:138): prog-id=7 op=UNLOAD Sep 6 00:11:51.728922 kernel: audit: type=1334 audit(1757117511.727:139): prog-id=8 op=UNLOAD Sep 6 00:11:51.727000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:11:51.727000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:11:51.728807 systemd[1]: Starting systemd-udevd.service... Sep 6 00:11:51.749231 systemd-udevd[1009]: Using default interface naming scheme 'v252'. Sep 6 00:11:51.765089 systemd[1]: Started systemd-udevd.service. Sep 6 00:11:51.775033 kernel: audit: type=1130 audit(1757117511.765:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:51.775072 kernel: audit: type=1334 audit(1757117511.768:141): prog-id=23 op=LOAD Sep 6 00:11:51.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.768000 audit: BPF prog-id=23 op=LOAD Sep 6 00:11:51.770098 systemd[1]: Starting systemd-networkd.service... Sep 6 00:11:51.783759 kernel: audit: type=1334 audit(1757117511.779:142): prog-id=24 op=LOAD Sep 6 00:11:51.783863 kernel: audit: type=1334 audit(1757117511.780:143): prog-id=25 op=LOAD Sep 6 00:11:51.783921 kernel: audit: type=1334 audit(1757117511.781:144): prog-id=26 op=LOAD Sep 6 00:11:51.779000 audit: BPF prog-id=24 op=LOAD Sep 6 00:11:51.780000 audit: BPF prog-id=25 op=LOAD Sep 6 00:11:51.781000 audit: BPF prog-id=26 op=LOAD Sep 6 00:11:51.783216 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:11:51.814423 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:11:51.820593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:11:51.825318 systemd[1]: Started systemd-userdbd.service. Sep 6 00:11:51.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:51.866757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:11:51.872737 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:11:51.877977 systemd-networkd[1016]: lo: Link UP Sep 6 00:11:51.878350 systemd-networkd[1016]: lo: Gained carrier Sep 6 00:11:51.879021 systemd-networkd[1016]: Enumeration completed Sep 6 00:11:51.879229 systemd[1]: Started systemd-networkd.service. Sep 6 00:11:51.879499 systemd-networkd[1016]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:11:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:11:51.881245 systemd-networkd[1016]: eth0: Link UP Sep 6 00:11:51.881365 systemd-networkd[1016]: eth0: Gained carrier Sep 6 00:11:51.891891 systemd-networkd[1016]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:11:51.895000 audit[1024]: AVC avc: denied { confidentiality } for pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:11:51.895000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a7df05d5c0 a1=338ec a2=7f86ee9d6bc5 a3=5 items=110 ppid=1009 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:11:51.895000 audit: CWD cwd="/" Sep 6 00:11:51.895000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=1 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=2 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=3 name=(null) inode=13584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=4 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=5 name=(null) inode=13585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=6 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=7 name=(null) inode=13586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=8 name=(null) inode=13586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=9 name=(null) inode=13587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=10 name=(null) inode=13586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=11 name=(null) inode=13588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=12 name=(null) inode=13586 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=13 name=(null) inode=13589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=14 name=(null) inode=13586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=15 name=(null) inode=13590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=16 name=(null) inode=13586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=17 name=(null) inode=13591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=18 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=19 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=20 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=21 name=(null) inode=13593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=22 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=23 name=(null) inode=13594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=24 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=25 name=(null) inode=13595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=26 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=27 name=(null) inode=13596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=28 name=(null) inode=13592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=29 name=(null) inode=13597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=30 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=31 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=32 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=33 name=(null) inode=13599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=34 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=35 name=(null) inode=13600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=36 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=37 name=(null) inode=13601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=38 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=39 name=(null) inode=13602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=40 name=(null) inode=13598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=41 name=(null) inode=13603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=42 name=(null) inode=13583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=43 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=44 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 
audit: PATH item=45 name=(null) inode=13605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=46 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=47 name=(null) inode=13606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=48 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=49 name=(null) inode=13607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=50 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=51 name=(null) inode=13608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=52 name=(null) inode=13604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=53 name=(null) inode=13609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=55 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=56 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=57 name=(null) inode=13611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=58 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=59 name=(null) inode=13612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=60 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=61 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=62 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=63 name=(null) inode=13614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=64 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=65 name=(null) inode=13615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=66 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=67 name=(null) inode=13616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=68 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=69 name=(null) inode=13617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=70 name=(null) inode=13613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=71 name=(null) inode=13618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=72 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=73 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=74 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=75 name=(null) inode=13620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=76 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=77 name=(null) inode=13621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=78 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=79 name=(null) inode=13622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=80 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=81 name=(null) inode=13623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=82 name=(null) inode=13619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=83 name=(null) inode=13624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=84 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=85 name=(null) inode=13625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=86 name=(null) inode=13625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=87 name=(null) inode=13626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=88 name=(null) inode=13625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=89 name=(null) inode=13627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=90 name=(null) inode=13625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=91 name=(null) inode=13628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=92 name=(null) inode=13625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=93 name=(null) inode=13629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=94 name=(null) inode=13625 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=95 name=(null) inode=13630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=96 name=(null) inode=13610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=97 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=98 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=99 name=(null) inode=13632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=100 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=101 name=(null) inode=13633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=102 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=103 name=(null) inode=13634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=104 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=105 name=(null) inode=13635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=106 name=(null) inode=13631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=107 name=(null) inode=13636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PATH item=109 name=(null) inode=16065 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:11:51.895000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:11:51.933790 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input3 Sep 6 00:11:51.952769 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:11:51.971745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 00:11:51.972138 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 6 00:11:51.972266 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 6 00:11:52.005141 kernel: kvm: Nested Virtualization enabled Sep 6 00:11:52.005243 kernel: SVM: kvm: Nested Paging enabled Sep 6 00:11:52.005259 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 6 00:11:52.006739 kernel: SVM: Virtual GIF supported Sep 6 00:11:52.048774 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:11:52.076168 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:11:52.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.078685 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:11:52.096902 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:11:52.122815 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:11:52.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.137014 systemd[1]: Reached target cryptsetup.target. Sep 6 00:11:52.139331 systemd[1]: Starting lvm2-activation.service... Sep 6 00:11:52.145231 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:11:52.173610 systemd[1]: Finished lvm2-activation.service. Sep 6 00:11:52.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.178814 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:11:52.179671 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:11:52.179731 systemd[1]: Reached target local-fs.target. Sep 6 00:11:52.180525 systemd[1]: Reached target machines.target. Sep 6 00:11:52.182603 systemd[1]: Starting ldconfig.service... Sep 6 00:11:52.183604 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:11:52.183684 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:52.184592 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:11:52.186465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:11:52.189142 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:11:52.191205 systemd[1]: Starting systemd-sysext.service... Sep 6 00:11:52.192262 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Sep 6 00:11:52.193337 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:11:52.198301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
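Editorial note: the long run of `audit:` AVC/SYSCALL/PATH records above (emitted while a udev worker touched tracefs under the SELinux lockdown check) is just whitespace-separated key=value pairs. Below is a minimal, hypothetical sketch — not part of the original log — showing how one such record body could be split into a dict for inspection; the sample fields are copied from the PATH records above.

```python
# Hypothetical helper: split an audit record body (as in the PATH/SYSCALL
# entries above) into key=value pairs. Quoted values are kept intact by shlex.
import shlex

def parse_audit_fields(record: str) -> dict:
    fields = {}
    for token in shlex.split(record):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

sample = ('audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 '
          'ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 '
          'nametype=PARENT')
parsed = parse_audit_fields(sample)
print(parsed["inode"], parsed["nametype"], parsed["obj"])
# -> 45 PARENT system_u:object_r:tracefs_t:s0
```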
Sep 6 00:11:52.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.205827 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:11:52.209876 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:11:52.210017 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:11:52.252069 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Sep 6 00:11:52.252069 systemd-fsck[1055]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:11:52.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.253582 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:11:52.259851 systemd[1]: Mounting boot.mount... Sep 6 00:11:52.260752 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 00:11:52.624132 systemd[1]: Mounted boot.mount. Sep 6 00:11:52.631744 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:11:52.638449 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:11:52.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.640990 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:11:52.641592 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:11:52.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.649853 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 00:11:52.655057 (sd-sysext)[1060]: Using extensions 'kubernetes'. Sep 6 00:11:52.655530 (sd-sysext)[1060]: Merged extensions into '/usr'. Sep 6 00:11:52.675144 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:52.677067 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:11:52.678091 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:11:52.679689 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:11:52.681763 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:11:52.683850 systemd[1]: Starting modprobe@loop.service... Sep 6 00:11:52.684763 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:11:52.684912 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:52.685052 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:52.688633 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:11:52.690034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:11:52.690212 systemd[1]: Finished modprobe@dm_mod.service. 
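Editorial note: the fsck.fat summary above reports `/dev/vda1: 790 files, 120761/258078 clusters`. As a quick worked check (not from the log), that FAT volume is a bit under half full:

```python
# Worked check of the fsck.fat summary above: used vs. total clusters on /dev/vda1.
used, total = 120761, 258078
print(f"{used / total:.1%} of clusters in use")   # -> 46.8% of clusters in use
```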
Sep 6 00:11:52.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.691707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:11:52.691940 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:11:52.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.709084 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:11:52.709222 systemd[1]: Finished modprobe@loop.service. Sep 6 00:11:52.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.710687 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:11:52.710935 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:11:52.712159 systemd[1]: Finished systemd-sysext.service. Sep 6 00:11:52.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:52.714690 systemd[1]: Starting ensure-sysext.service... Sep 6 00:11:52.717238 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:11:52.721585 systemd[1]: Reloading. Sep 6 00:11:52.736047 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:11:52.736673 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:11:52.739398 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:11:52.752408 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
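Editorial note: a recurring pattern in this journal is a `Starting <unit>...` entry followed later by a `Finished <unit>.` entry, each carrying a microsecond timestamp (for example, modprobe@efi_pstore.service above starts at 00:11:52.681763 and finishes at 00:11:52.691940). A hypothetical sketch of pairing those entries to estimate per-unit durations, assuming the `HH:MM:SS.microseconds` prefix format used throughout this log:

```python
# Hypothetical sketch: pair "Starting X..." / "Finished X." entries and report
# per-unit durations, using the timestamp format seen in this journal.
import re
from datetime import datetime

ENTRY = re.compile(r"(\d{2}:\d{2}:\d{2}\.\d{6}) systemd\[1\]: (Starting|Finished) (.+?)\.+$")

def unit_durations(lines):
    started, durations = {}, {}
    for line in lines:
        m = ENTRY.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%H:%M:%S.%f")
        verb, unit = m.group(2), m.group(3)
        if verb == "Starting":
            started[unit] = ts
        elif unit in started:
            durations[unit] = (ts - started.pop(unit)).total_seconds()
    return durations

log = [
    "Sep 6 00:11:52.681763 systemd[1]: Starting modprobe@efi_pstore.service...",
    "Sep 6 00:11:52.691940 systemd[1]: Finished modprobe@efi_pstore.service.",
]
print(unit_durations(log))   # -> {'modprobe@efi_pstore.service': 0.010177}
```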
Sep 6 00:11:52.836574 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2025-09-06T00:11:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:11:52.836607 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2025-09-06T00:11:52Z" level=info msg="torcx already run" Sep 6 00:11:52.928930 systemd-networkd[1016]: eth0: Gained IPv6LL Sep 6 00:11:52.942310 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:11:52.942340 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:11:52.959952 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:11:53.014000 audit: BPF prog-id=27 op=LOAD Sep 6 00:11:53.014000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:11:53.014000 audit: BPF prog-id=28 op=LOAD Sep 6 00:11:53.014000 audit: BPF prog-id=29 op=LOAD Sep 6 00:11:53.014000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:11:53.014000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:11:53.016000 audit: BPF prog-id=30 op=LOAD Sep 6 00:11:53.016000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:11:53.016000 audit: BPF prog-id=31 op=LOAD Sep 6 00:11:53.016000 audit: BPF prog-id=32 op=LOAD Sep 6 00:11:53.017000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:11:53.017000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:11:53.017000 audit: BPF prog-id=33 op=LOAD Sep 6 00:11:53.017000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:11:53.017000 audit: BPF prog-id=34 op=LOAD Sep 6 00:11:53.017000 audit: BPF prog-id=35 op=LOAD Sep 6 00:11:53.017000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:11:53.017000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:11:53.021393 systemd[1]: Finished ldconfig.service. Sep 6 00:11:53.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:53.023557 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:11:53.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:53.028622 systemd[1]: Starting audit-rules.service... Sep 6 00:11:53.030752 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:11:53.033105 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:11:53.034000 audit: BPF prog-id=36 op=LOAD Sep 6 00:11:53.036181 systemd[1]: Starting systemd-resolved.service... Sep 6 00:11:53.037000 audit: BPF prog-id=37 op=LOAD Sep 6 00:11:53.039222 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:11:53.041668 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:11:53.043413 systemd[1]: Finished clean-ca-certificates.service. 
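Editorial note: the `audit: BPF prog-id=N op=LOAD/UNLOAD` records above are emitted as systemd swaps its BPF programs during the reload. Reducing them to the set of program IDs still loaded is a one-pass fold; a hypothetical sketch using the first few events shown above:

```python
# Hypothetical sketch: track which BPF prog-ids remain loaded, from the
# "audit: BPF prog-id=N op=LOAD/UNLOAD" records in this journal.
import re

BPF = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def loaded_progs(text):
    loaded = set()
    for prog_id, op in BPF.findall(text):
        if op == "LOAD":
            loaded.add(int(prog_id))
        else:
            loaded.discard(int(prog_id))
    return loaded

events = ("audit: BPF prog-id=27 op=LOAD audit: BPF prog-id=23 op=UNLOAD "
          "audit: BPF prog-id=28 op=LOAD audit: BPF prog-id=29 op=LOAD "
          "audit: BPF prog-id=21 op=UNLOAD audit: BPF prog-id=22 op=UNLOAD")
print(sorted(loaded_progs(events)))   # -> [27, 28, 29]
```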
Sep 6 00:11:53.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:11:53.046000 audit[1140]: SYSTEM_BOOT pid=1140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:11:53.051765 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.053650 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:11:53.056291 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:11:53.058924 systemd[1]: Starting modprobe@loop.service... Sep 6 00:11:53.060035 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.060184 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:53.060395 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:11:53.061000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:11:53.061000 audit[1152]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2a914750 a2=420 a3=0 items=0 ppid=1129 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:11:53.061000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:11:53.062486 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:11:53.062700 augenrules[1152]: No rules Sep 6 00:11:53.064369 systemd[1]: Finished audit-rules.service. Sep 6 00:11:53.065819 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:11:53.068675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:11:53.068864 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:11:53.070456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:11:53.070598 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:11:53.072270 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:11:53.072427 systemd[1]: Finished modprobe@loop.service. Sep 6 00:11:53.076497 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.078785 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:11:53.081501 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:11:53.084534 systemd[1]: Starting modprobe@loop.service... Sep 6 00:11:53.085605 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.085755 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:53.087616 systemd[1]: Starting systemd-update-done.service... 
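Editorial note: the auditctl record above carries a hex-encoded `PROCTITLE` field; the kernel stores the process title as NUL-separated argv. Decoding the value shown recovers the command that loaded the (empty) rule set, as a short worked example:

```python
# Worked example: decode the hex PROCTITLE value from the auditctl record above.
# argv elements are separated by NUL bytes.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)   # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```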
Sep 6 00:11:53.088805 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:11:53.090628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:11:53.090809 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:11:53.092434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:11:53.092587 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:11:53.094480 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:11:53.094654 systemd[1]: Finished modprobe@loop.service. Sep 6 00:11:53.096277 systemd[1]: Finished systemd-update-done.service. Sep 6 00:11:53.101543 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.767060 systemd-timesyncd[1139]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:11:53.767135 systemd-timesyncd[1139]: Initial clock synchronization to Sat 2025-09-06 00:11:53.766938 UTC. Sep 6 00:11:53.767225 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:11:53.770356 systemd[1]: Starting modprobe@drm.service... Sep 6 00:11:53.773776 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:11:53.776784 systemd[1]: Starting modprobe@loop.service... Sep 6 00:11:53.777952 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.778118 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:53.779518 systemd-resolved[1136]: Positive Trust Anchors: Sep 6 00:11:53.779531 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:11:53.779563 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:11:53.780515 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:11:53.781984 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:11:53.783243 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:11:53.785184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:11:53.785313 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:11:53.787054 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:11:53.787210 systemd[1]: Finished modprobe@drm.service. Sep 6 00:11:53.788919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:11:53.789159 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:11:53.790965 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:11:53.791229 systemd[1]: Finished modprobe@loop.service. Sep 6 00:11:53.792927 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:11:53.793827 systemd-resolved[1136]: Defaulting to hostname 'linux'. Sep 6 00:11:53.795348 systemd[1]: Reached target time-set.target. 
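Editorial note: systemd-resolved above lists its DNSSEC positive trust anchor (the root `. IN DS 20326 8 2 ...`) alongside a set of negative trust anchors for private-use zones. As an illustration only — this is not resolved's implementation — the check for whether a name falls under a negative anchor amounts to a label-wise suffix match; the anchor list below is an abridged copy of the one in the log:

```python
# Illustration only: suffix match against an abridged copy of the negative
# trust anchors listed by systemd-resolved above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
}

def under_negative_anchor(name):
    labels = name.rstrip(".").lower().split(".")
    # Covered if the name equals an anchor or is a subdomain of one.
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

print(under_negative_anchor("printer.local"))           # -> True
print(under_negative_anchor("13.0.0.10.in-addr.arpa"))  # -> True
print(under_negative_anchor("example.com"))             # -> False
```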
Sep 6 00:11:53.796448 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:11:53.796497 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.796613 systemd[1]: Started systemd-resolved.service. Sep 6 00:11:53.797865 systemd[1]: Finished ensure-sysext.service. Sep 6 00:11:53.799511 systemd[1]: Reached target network.target. Sep 6 00:11:53.800561 systemd[1]: Reached target network-online.target. Sep 6 00:11:53.801477 systemd[1]: Reached target nss-lookup.target. Sep 6 00:11:53.802375 systemd[1]: Reached target sysinit.target. Sep 6 00:11:53.803280 systemd[1]: Started motdgen.path. Sep 6 00:11:53.804062 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:11:53.805534 systemd[1]: Started logrotate.timer. Sep 6 00:11:53.806385 systemd[1]: Started mdadm.timer. Sep 6 00:11:53.807121 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:11:53.808036 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:11:53.808060 systemd[1]: Reached target paths.target. Sep 6 00:11:53.808863 systemd[1]: Reached target timers.target. Sep 6 00:11:53.810289 systemd[1]: Listening on dbus.socket. Sep 6 00:11:53.812395 systemd[1]: Starting docker.socket... Sep 6 00:11:53.815720 systemd[1]: Listening on sshd.socket. Sep 6 00:11:53.816727 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:53.817181 systemd[1]: Listening on docker.socket. Sep 6 00:11:53.818052 systemd[1]: Reached target sockets.target. Sep 6 00:11:53.818874 systemd[1]: Reached target basic.target. Sep 6 00:11:53.819699 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.819722 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:11:53.820861 systemd[1]: Starting containerd.service... Sep 6 00:11:53.822911 systemd[1]: Starting dbus.service... Sep 6 00:11:53.824935 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:11:53.827717 systemd[1]: Starting extend-filesystems.service... Sep 6 00:11:53.829190 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:11:53.830446 systemd[1]: Starting kubelet.service... Sep 6 00:11:53.830567 jq[1172]: false Sep 6 00:11:53.832490 systemd[1]: Starting motdgen.service... Sep 6 00:11:53.834447 systemd[1]: Starting prepare-helm.service... Sep 6 00:11:53.836498 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:11:53.838702 systemd[1]: Starting sshd-keygen.service... Sep 6 00:11:53.842081 systemd[1]: Starting systemd-logind.service... Sep 6 00:11:53.843083 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:11:53.843170 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:11:53.843658 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 6 00:11:53.844457 systemd[1]: Starting update-engine.service... Sep 6 00:11:53.847021 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:11:53.850162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:11:53.850390 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:11:53.852487 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:11:53.852690 jq[1190]: true Sep 6 00:11:53.852734 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:11:53.860717 jq[1195]: true Sep 6 00:11:53.864850 extend-filesystems[1173]: Found loop1 Sep 6 00:11:53.864850 extend-filesystems[1173]: Found sr0 Sep 6 00:11:53.864850 extend-filesystems[1173]: Found vda Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda1 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda2 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda3 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found usr Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda4 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda6 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda7 Sep 6 00:11:53.876298 extend-filesystems[1173]: Found vda9 Sep 6 00:11:53.876298 extend-filesystems[1173]: Checking size of /dev/vda9 Sep 6 00:11:53.874054 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:53.895116 tar[1193]: linux-amd64/helm Sep 6 00:11:53.876643 dbus-daemon[1171]: [system] SELinux support is enabled Sep 6 00:11:53.874111 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:11:53.876817 systemd[1]: Started dbus.service. Sep 6 00:11:53.880893 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:11:53.880927 systemd[1]: Reached target system-config.target. Sep 6 00:11:53.882445 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:11:53.882492 systemd[1]: Reached target user-config.target. Sep 6 00:11:53.888513 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:11:53.888686 systemd[1]: Finished motdgen.service. Sep 6 00:11:53.907951 extend-filesystems[1173]: Resized partition /dev/vda9 Sep 6 00:11:53.915740 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:11:53.919762 update_engine[1185]: I0906 00:11:53.918762 1185 main.cc:92] Flatcar Update Engine starting Sep 6 00:11:53.920256 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:11:53.935415 systemd[1]: Started update-engine.service. Sep 6 00:11:53.938372 systemd[1]: Started locksmithd.service. Sep 6 00:11:53.941159 update_engine[1185]: I0906 00:11:53.939651 1185 update_check_scheduler.cc:74] Next update check in 5m49s Sep 6 00:11:53.953754 systemd-logind[1183]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:11:53.953775 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:11:53.954351 systemd-logind[1183]: New seat seat0. Sep 6 00:11:53.961831 systemd[1]: Started systemd-logind.service. 
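Editorial note: the kernel line above records an online resize of /dev/vda9: `EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks`. Assuming the 4 KiB block size that resize2fs reports just below, that is roughly a growth from 2.1 GiB to 7.1 GiB; a quick worked check:

```python
# Worked check of the EXT4 online resize above (assuming 4 KiB blocks, per resize2fs).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699
gib = 1024 ** 3
print(f"{old_blocks * BLOCK / gib:.2f} GiB -> {new_blocks * BLOCK / gib:.2f} GiB")
# -> 2.11 GiB -> 7.11 GiB
```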
Sep 6 00:11:53.969019 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:11:54.213395 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:11:54.213395 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:11:54.213395 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:11:54.217656 extend-filesystems[1173]: Resized filesystem in /dev/vda9 Sep 6 00:11:54.219333 env[1197]: time="2025-09-06T00:11:54.215433586Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:11:54.220042 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:11:54.220267 systemd[1]: Finished extend-filesystems.service. Sep 6 00:11:54.224434 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:11:54.225039 bash[1224]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:11:54.226073 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:11:54.235731 env[1197]: time="2025-09-06T00:11:54.235546806Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:11:54.235731 env[1197]: time="2025-09-06T00:11:54.235720972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237163497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237190498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237387297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237402175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237413126Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237422333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.237660 env[1197]: time="2025-09-06T00:11:54.237490952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.246183 env[1197]: time="2025-09-06T00:11:54.237771317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:11:54.246183 env[1197]: time="2025-09-06T00:11:54.237887555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:11:54.246183 env[1197]: time="2025-09-06T00:11:54.237901431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:11:54.246183 env[1197]: time="2025-09-06T00:11:54.237945955Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:11:54.246183 env[1197]: time="2025-09-06T00:11:54.237957236Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:11:54.254308 env[1197]: time="2025-09-06T00:11:54.254242006Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:11:54.254308 env[1197]: time="2025-09-06T00:11:54.254310424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:11:54.254391 env[1197]: time="2025-09-06T00:11:54.254324040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:11:54.254391 env[1197]: time="2025-09-06T00:11:54.254384673Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254458 env[1197]: time="2025-09-06T00:11:54.254431150Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254486 env[1197]: time="2025-09-06T00:11:54.254450346Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254486 env[1197]: time="2025-09-06T00:11:54.254474582Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254529 env[1197]: time="2025-09-06T00:11:54.254502755Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254529 env[1197]: time="2025-09-06T00:11:54.254517773Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254567 env[1197]: time="2025-09-06T00:11:54.254537009Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254567 env[1197]: time="2025-09-06T00:11:54.254553209Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.254620 env[1197]: time="2025-09-06T00:11:54.254579940Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:11:54.254823 env[1197]: time="2025-09-06T00:11:54.254792949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:11:54.254966 env[1197]: time="2025-09-06T00:11:54.254933523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:11:54.255435 env[1197]: time="2025-09-06T00:11:54.255403223Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:11:54.255484 env[1197]: time="2025-09-06T00:11:54.255443499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 6 00:11:54.255484 env[1197]: time="2025-09-06T00:11:54.255458026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:11:54.255595 env[1197]: time="2025-09-06T00:11:54.255561160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255641 env[1197]: time="2025-09-06T00:11:54.255630910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255664 env[1197]: time="2025-09-06T00:11:54.255648123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255664 env[1197]: time="2025-09-06T00:11:54.255659745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255711 env[1197]: time="2025-09-06T00:11:54.255673220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255711 env[1197]: time="2025-09-06T00:11:54.255687477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255749 env[1197]: time="2025-09-06T00:11:54.255714928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255749 env[1197]: time="2025-09-06T00:11:54.255726139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.255749 env[1197]: time="2025-09-06T00:11:54.255739013Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255901047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255922858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255934319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255944599Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255959276Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255968514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.255986267Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:11:54.256763 env[1197]: time="2025-09-06T00:11:54.256047682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:11:54.256959 env[1197]: time="2025-09-06T00:11:54.256368674Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:11:54.256959 env[1197]: time="2025-09-06T00:11:54.256454164Z" level=info msg="Connect containerd service" Sep 6 00:11:54.256959 env[1197]: time="2025-09-06T00:11:54.256519517Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.257471071Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.257977241Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258032334Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258095062Z" level=info msg="containerd successfully booted in 0.270278s" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258328920Z" level=info msg="Start subscribing containerd event" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258386448Z" level=info msg="Start recovering state" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258454325Z" level=info msg="Start event monitor" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258469093Z" level=info msg="Start snapshots syncer" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258478901Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:11:54.258979 env[1197]: time="2025-09-06T00:11:54.258485514Z" level=info msg="Start streaming server" Sep 6 00:11:54.258240 systemd[1]: Started containerd.service. Sep 6 00:11:54.261828 systemd[1]: Finished sshd-keygen.service. Sep 6 00:11:54.264411 systemd[1]: Starting issuegen.service... Sep 6 00:11:54.268768 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:11:54.271147 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:11:54.271307 systemd[1]: Finished issuegen.service. Sep 6 00:11:54.273633 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:11:54.280492 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:11:54.283067 systemd[1]: Started getty@tty1.service. Sep 6 00:11:54.284958 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:11:54.286215 systemd[1]: Reached target getty.target. Sep 6 00:11:54.687234 tar[1193]: linux-amd64/LICENSE Sep 6 00:11:54.687435 tar[1193]: linux-amd64/README.md Sep 6 00:11:54.691791 systemd[1]: Finished prepare-helm.service. Sep 6 00:11:55.411941 systemd[1]: Started kubelet.service. Sep 6 00:11:55.414546 systemd[1]: Reached target multi-user.target. Sep 6 00:11:55.417161 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:11:55.424926 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:11:55.425099 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:11:55.426217 systemd[1]: Startup finished in 861ms (kernel) + 6.635s (initrd) + 8.165s (userspace) = 15.663s. Sep 6 00:11:56.323371 kubelet[1253]: E0906 00:11:56.323271 1253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:11:56.325239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:11:56.325494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:11:56.325836 systemd[1]: kubelet.service: Consumed 1.947s CPU time. Sep 6 00:12:03.418696 systemd[1]: Created slice system-sshd.slice. Sep 6 00:12:03.419768 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:60572.service. Sep 6 00:12:03.456454 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:12:03.457939 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.466223 systemd[1]: Created slice user-500.slice. Sep 6 00:12:03.467332 systemd[1]: Starting user-runtime-dir@500.service... 
Sep 6 00:12:03.468828 systemd-logind[1183]: New session 1 of user core. Sep 6 00:12:03.475689 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:12:03.477010 systemd[1]: Starting user@500.service... Sep 6 00:12:03.479757 (systemd)[1265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.551074 systemd[1265]: Queued start job for default target default.target. Sep 6 00:12:03.551589 systemd[1265]: Reached target paths.target. Sep 6 00:12:03.551606 systemd[1265]: Reached target sockets.target. Sep 6 00:12:03.551618 systemd[1265]: Reached target timers.target. Sep 6 00:12:03.551629 systemd[1265]: Reached target basic.target. Sep 6 00:12:03.551666 systemd[1265]: Reached target default.target. Sep 6 00:12:03.551688 systemd[1265]: Startup finished in 66ms. Sep 6 00:12:03.551773 systemd[1]: Started user@500.service. Sep 6 00:12:03.552831 systemd[1]: Started session-1.scope. Sep 6 00:12:03.605334 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:60582.service. Sep 6 00:12:03.638338 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 60582 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:12:03.639374 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.643893 systemd-logind[1183]: New session 2 of user core. Sep 6 00:12:03.645722 systemd[1]: Started session-2.scope. Sep 6 00:12:03.699240 sshd[1274]: pam_unix(sshd:session): session closed for user core Sep 6 00:12:03.702107 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:60582.service: Deactivated successfully. Sep 6 00:12:03.702655 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:12:03.703149 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:12:03.704285 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:60598.service. Sep 6 00:12:03.704935 systemd-logind[1183]: Removed session 2. Sep 6 00:12:03.737379 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 60598 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:12:03.738852 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.742388 systemd-logind[1183]: New session 3 of user core. Sep 6 00:12:03.743591 systemd[1]: Started session-3.scope. Sep 6 00:12:03.793038 sshd[1280]: pam_unix(sshd:session): session closed for user core Sep 6 00:12:03.796554 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:60598.service: Deactivated successfully. Sep 6 00:12:03.797253 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:12:03.797849 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:12:03.799263 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:60612.service. Sep 6 00:12:03.800536 systemd-logind[1183]: Removed session 3. Sep 6 00:12:03.832106 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 60612 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:12:03.833701 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.838602 systemd-logind[1183]: New session 4 of user core. Sep 6 00:12:03.839414 systemd[1]: Started session-4.scope. Sep 6 00:12:03.892594 sshd[1286]: pam_unix(sshd:session): session closed for user core Sep 6 00:12:03.895325 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:60612.service: Deactivated successfully. Sep 6 00:12:03.895889 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:12:03.896405 systemd-logind[1183]: Session 4 logged out. 
Waiting for processes to exit. Sep 6 00:12:03.897603 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:60620.service. Sep 6 00:12:03.898344 systemd-logind[1183]: Removed session 4. Sep 6 00:12:03.930502 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 60620 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:12:03.931733 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:12:03.934955 systemd-logind[1183]: New session 5 of user core. Sep 6 00:12:03.935865 systemd[1]: Started session-5.scope. Sep 6 00:12:03.990452 sudo[1295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:12:03.990642 sudo[1295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:12:04.026489 systemd[1]: Starting docker.service... Sep 6 00:12:04.089076 env[1307]: time="2025-09-06T00:12:04.089012531Z" level=info msg="Starting up" Sep 6 00:12:04.090435 env[1307]: time="2025-09-06T00:12:04.090393180Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:12:04.090435 env[1307]: time="2025-09-06T00:12:04.090425852Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:12:04.090513 env[1307]: time="2025-09-06T00:12:04.090476747Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:12:04.090513 env[1307]: time="2025-09-06T00:12:04.090495662Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:12:04.092731 env[1307]: time="2025-09-06T00:12:04.092704134Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:12:04.092731 env[1307]: time="2025-09-06T00:12:04.092720966Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:12:04.092809 env[1307]: time="2025-09-06T00:12:04.092741234Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:12:04.092809 env[1307]: time="2025-09-06T00:12:04.092757564Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:12:04.260086 env[1307]: time="2025-09-06T00:12:04.259977361Z" level=info msg="Loading containers: start." Sep 6 00:12:04.430028 kernel: Initializing XFRM netlink socket Sep 6 00:12:04.458323 env[1307]: time="2025-09-06T00:12:04.458261014Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:12:04.507767 systemd-networkd[1016]: docker0: Link UP Sep 6 00:12:04.532161 env[1307]: time="2025-09-06T00:12:04.532082846Z" level=info msg="Loading containers: done." Sep 6 00:12:04.547122 env[1307]: time="2025-09-06T00:12:04.547068830Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:12:04.547272 env[1307]: time="2025-09-06T00:12:04.547251402Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:12:04.547391 env[1307]: time="2025-09-06T00:12:04.547364535Z" level=info msg="Daemon has completed initialization" Sep 6 00:12:04.564430 systemd[1]: Started docker.service. 
Sep 6 00:12:04.572509 env[1307]: time="2025-09-06T00:12:04.572425703Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:12:05.654189 env[1197]: time="2025-09-06T00:12:05.654143951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:12:06.395689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:12:06.395934 systemd[1]: Stopped kubelet.service. Sep 6 00:12:06.395981 systemd[1]: kubelet.service: Consumed 1.947s CPU time. Sep 6 00:12:06.397826 systemd[1]: Starting kubelet.service... Sep 6 00:12:06.550367 systemd[1]: Started kubelet.service. Sep 6 00:12:06.750013 kubelet[1440]: E0906 00:12:06.749853 1440 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:12:06.752895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:12:06.753113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:12:06.869096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105513042.mount: Deactivated successfully. Sep 6 00:12:08.911408 env[1197]: time="2025-09-06T00:12:08.911319578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:08.914348 env[1197]: time="2025-09-06T00:12:08.914268809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:08.917387 env[1197]: time="2025-09-06T00:12:08.917118022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:08.921303 env[1197]: time="2025-09-06T00:12:08.921142720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:08.922129 env[1197]: time="2025-09-06T00:12:08.922091840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 00:12:08.923398 env[1197]: time="2025-09-06T00:12:08.923363505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 00:12:11.969431 env[1197]: time="2025-09-06T00:12:11.969309831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:11.971315 env[1197]: time="2025-09-06T00:12:11.971281369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:11.973189 env[1197]: time="2025-09-06T00:12:11.973162977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:12:11.974776 env[1197]: time="2025-09-06T00:12:11.974749312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:11.975731 env[1197]: time="2025-09-06T00:12:11.975628110Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 00:12:11.976814 env[1197]: time="2025-09-06T00:12:11.976758430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 00:12:14.010570 env[1197]: time="2025-09-06T00:12:14.010476007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:14.012573 env[1197]: time="2025-09-06T00:12:14.012508668Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:14.016032 env[1197]: time="2025-09-06T00:12:14.015945163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:14.017806 env[1197]: time="2025-09-06T00:12:14.017762341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:14.018427 env[1197]: time="2025-09-06T00:12:14.018388435Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 00:12:14.019261 env[1197]: time="2025-09-06T00:12:14.019213763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:12:15.869911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326045976.mount: Deactivated successfully. Sep 6 00:12:16.895703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:12:16.895898 systemd[1]: Stopped kubelet.service. Sep 6 00:12:16.897903 systemd[1]: Starting kubelet.service... Sep 6 00:12:18.106219 systemd[1]: Started kubelet.service. Sep 6 00:12:19.635548 kubelet[1453]: E0906 00:12:19.635473 1453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:12:19.637342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:12:19.637478 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:12:21.116638 env[1197]: time="2025-09-06T00:12:21.116560401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:21.234752 env[1197]: time="2025-09-06T00:12:21.234662164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:21.453323 env[1197]: time="2025-09-06T00:12:21.453266626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:21.514940 env[1197]: time="2025-09-06T00:12:21.514857020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:21.515452 env[1197]: time="2025-09-06T00:12:21.515423883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 00:12:21.516108 env[1197]: time="2025-09-06T00:12:21.516081115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:12:22.848746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493931907.mount: Deactivated successfully. Sep 6 00:12:24.088628 env[1197]: time="2025-09-06T00:12:24.088567676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.092330 env[1197]: time="2025-09-06T00:12:24.092282583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.096164 env[1197]: time="2025-09-06T00:12:24.096130559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.099017 env[1197]: time="2025-09-06T00:12:24.098947692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.099909 env[1197]: time="2025-09-06T00:12:24.099870603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 00:12:24.100582 env[1197]: time="2025-09-06T00:12:24.100534277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:12:24.943452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349882299.mount: Deactivated successfully. 
Sep 6 00:12:24.949728 env[1197]: time="2025-09-06T00:12:24.949667616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.951412 env[1197]: time="2025-09-06T00:12:24.951391739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.952909 env[1197]: time="2025-09-06T00:12:24.952841688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.956010 env[1197]: time="2025-09-06T00:12:24.955807530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:24.956499 env[1197]: time="2025-09-06T00:12:24.956462869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:12:24.957074 env[1197]: time="2025-09-06T00:12:24.957045692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:12:25.531061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778427060.mount: Deactivated successfully. Sep 6 00:12:29.645702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 00:12:29.645936 systemd[1]: Stopped kubelet.service. Sep 6 00:12:29.647651 systemd[1]: Starting kubelet.service... Sep 6 00:12:29.742319 systemd[1]: Started kubelet.service. Sep 6 00:12:29.871724 kubelet[1464]: E0906 00:12:29.871646 1464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:12:29.873608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:12:29.873728 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:12:30.398927 env[1197]: time="2025-09-06T00:12:30.398859481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:30.400965 env[1197]: time="2025-09-06T00:12:30.400927203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:30.403115 env[1197]: time="2025-09-06T00:12:30.403059309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:30.404810 env[1197]: time="2025-09-06T00:12:30.404766991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:30.405569 env[1197]: time="2025-09-06T00:12:30.405524076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 00:12:32.520370 systemd[1]: Stopped kubelet.service. Sep 6 00:12:32.522549 systemd[1]: Starting kubelet.service... Sep 6 00:12:32.545628 systemd[1]: Reloading. Sep 6 00:12:32.628128 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-09-06T00:12:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:12:32.628162 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-09-06T00:12:32Z" level=info msg="torcx already run" Sep 6 00:12:33.295401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:12:33.295420 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:12:33.313110 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:12:33.392938 systemd[1]: Started kubelet.service. Sep 6 00:12:33.394452 systemd[1]: Stopping kubelet.service... Sep 6 00:12:33.394714 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:12:33.394896 systemd[1]: Stopped kubelet.service. Sep 6 00:12:33.396547 systemd[1]: Starting kubelet.service... Sep 6 00:12:33.487870 systemd[1]: Started kubelet.service. Sep 6 00:12:33.660543 kubelet[1568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:12:33.660543 kubelet[1568]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:12:33.660543 kubelet[1568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:12:33.660913 kubelet[1568]: I0906 00:12:33.660597 1568 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:12:33.881727 kubelet[1568]: I0906 00:12:33.881659 1568 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:12:33.881727 kubelet[1568]: I0906 00:12:33.881712 1568 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:12:33.882221 kubelet[1568]: I0906 00:12:33.882198 1568 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:12:33.899909 kubelet[1568]: E0906 00:12:33.899848 1568 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:33.900728 kubelet[1568]: I0906 00:12:33.900704 1568 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:12:33.907620 kubelet[1568]: E0906 00:12:33.907590 1568 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:12:33.907620 kubelet[1568]: I0906 00:12:33.907618 1568 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:12:33.913063 kubelet[1568]: I0906 00:12:33.912969 1568 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:12:33.914092 kubelet[1568]: I0906 00:12:33.914044 1568 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:12:33.914274 kubelet[1568]: I0906 00:12:33.914233 1568 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:12:33.914458 kubelet[1568]: I0906 00:12:33.914267 1568 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:12:33.914567 kubelet[1568]: I0906 00:12:33.914469 1568 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:12:33.914567 kubelet[1568]: I0906 00:12:33.914478 1568 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:12:33.914621 kubelet[1568]: I0906 00:12:33.914583 1568 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:12:33.924412 kubelet[1568]: I0906 00:12:33.924386 1568 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:12:33.924412 kubelet[1568]: I0906 00:12:33.924414 1568 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:12:33.924494 kubelet[1568]: I0906 00:12:33.924456 1568 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:12:33.924494 kubelet[1568]: I0906 00:12:33.924483 1568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:12:33.926151 kubelet[1568]: W0906 00:12:33.926100 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:33.928318 kubelet[1568]: E0906 00:12:33.926161 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:33.928868 kubelet[1568]: W0906 00:12:33.928809 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:33.928988 kubelet[1568]: E0906 00:12:33.928963 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:33.937513 kubelet[1568]: I0906 00:12:33.937469 1568 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:12:33.937985 kubelet[1568]: I0906 00:12:33.937959 1568 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:12:33.938560 kubelet[1568]: W0906 00:12:33.938533 1568 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:12:33.943272 kubelet[1568]: I0906 00:12:33.943237 1568 server.go:1274] "Started kubelet" Sep 6 00:12:33.944025 kubelet[1568]: I0906 00:12:33.943922 1568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:12:33.944271 kubelet[1568]: I0906 00:12:33.943968 1568 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:12:33.944526 kubelet[1568]: I0906 00:12:33.944496 1568 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:12:33.945305 kubelet[1568]: I0906 00:12:33.945282 1568 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:12:33.946968 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:12:33.947178 kubelet[1568]: I0906 00:12:33.947162 1568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:12:33.947707 kubelet[1568]: I0906 00:12:33.947684 1568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:12:33.948416 kubelet[1568]: E0906 00:12:33.948395 1568 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:12:33.948673 kubelet[1568]: I0906 00:12:33.948658 1568 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:12:33.948856 kubelet[1568]: I0906 00:12:33.948839 1568 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:12:33.949026 kubelet[1568]: I0906 00:12:33.949012 1568 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:12:33.949587 kubelet[1568]: W0906 00:12:33.949536 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:33.949587 kubelet[1568]: E0906 00:12:33.949596 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:33.949710 kubelet[1568]: I0906 00:12:33.949636 1568 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:12:33.949710 kubelet[1568]: I0906 00:12:33.949699 1568 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:12:33.950716 kubelet[1568]: E0906 00:12:33.950034 1568 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:12:33.950716 kubelet[1568]: E0906 00:12:33.950120 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Sep 6 00:12:33.951153 kubelet[1568]: I0906 00:12:33.951129 1568 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:12:33.952792 kubelet[1568]: E0906 00:12:33.951299 1568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862891b0f635d89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:12:33.943207305 +0000 UTC m=+0.447977591,LastTimestamp:2025-09-06 00:12:33.943207305 +0000 UTC m=+0.447977591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:12:33.961472 kubelet[1568]: I0906 00:12:33.961440 1568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:12:33.962391 kubelet[1568]: I0906 00:12:33.962366 1568 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:12:33.962451 kubelet[1568]: I0906 00:12:33.962404 1568 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:12:33.962451 kubelet[1568]: I0906 00:12:33.962429 1568 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:12:33.962494 kubelet[1568]: E0906 00:12:33.962467 1568 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:12:33.965606 kubelet[1568]: I0906 00:12:33.965585 1568 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:12:33.965606 kubelet[1568]: I0906 00:12:33.965599 1568 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:12:33.965606 kubelet[1568]: I0906 00:12:33.965617 1568 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:12:33.966416 kubelet[1568]: W0906 00:12:33.966356 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:33.966475 kubelet[1568]: E0906 00:12:33.966430 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:34.050548 kubelet[1568]: E0906 00:12:34.050493 1568 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:12:34.062642 kubelet[1568]: E0906 00:12:34.062593 1568 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:12:34.151066 kubelet[1568]: E0906 00:12:34.151029 1568 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:12:34.151429 kubelet[1568]: E0906 00:12:34.151388 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Sep 6 00:12:34.252046 kubelet[1568]: E0906 00:12:34.251876 1568 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:12:34.263137 kubelet[1568]: E0906 00:12:34.263081 1568 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:12:34.275770 kubelet[1568]: I0906 00:12:34.275731 1568 policy_none.go:49] "None policy: Start" Sep 6 00:12:34.276703 kubelet[1568]: I0906 00:12:34.276683 1568 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:12:34.276752 kubelet[1568]: I0906 00:12:34.276717 1568 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:12:34.281877 systemd[1]: Created slice kubepods.slice. Sep 6 00:12:34.286562 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:12:34.289464 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 6 00:12:34.302657 kubelet[1568]: I0906 00:12:34.302624 1568 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:12:34.302816 kubelet[1568]: I0906 00:12:34.302788 1568 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:12:34.302816 kubelet[1568]: I0906 00:12:34.302810 1568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:12:34.303167 kubelet[1568]: I0906 00:12:34.303149 1568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:12:34.307272 kubelet[1568]: E0906 00:12:34.307231 1568 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:12:34.404549 kubelet[1568]: I0906 00:12:34.404490 1568 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:34.404857 kubelet[1568]: E0906 00:12:34.404832 1568 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 6 00:12:34.552573 kubelet[1568]: E0906 00:12:34.552421 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Sep 6 00:12:34.606558 kubelet[1568]: I0906 00:12:34.606520 1568 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:34.606984 kubelet[1568]: E0906 00:12:34.606934 1568 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 6 00:12:34.671406 systemd[1]: Created slice kubepods-burstable-poda36c0195b2867418aa50a7dcd99b3b76.slice. Sep 6 00:12:34.685191 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 6 00:12:34.693006 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 6 00:12:34.752847 kubelet[1568]: I0906 00:12:34.752784 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:34.752847 kubelet[1568]: I0906 00:12:34.752847 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:34.753577 kubelet[1568]: I0906 00:12:34.752884 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:34.753577 kubelet[1568]: I0906 00:12:34.752912 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:12:34.753577 kubelet[1568]: I0906 00:12:34.752931 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:34.753577 kubelet[1568]: I0906 00:12:34.752951 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:34.753577 kubelet[1568]: I0906 00:12:34.752969 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:34.753697 kubelet[1568]: I0906 00:12:34.753007 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:34.753697 kubelet[1568]: I0906 00:12:34.753030 1568 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:34.857358 kubelet[1568]: W0906 00:12:34.857150 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:34.857358 kubelet[1568]: E0906 00:12:34.857267 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:34.983493 kubelet[1568]: E0906 00:12:34.983433 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:34.984221 env[1197]: time="2025-09-06T00:12:34.984187036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a36c0195b2867418aa50a7dcd99b3b76,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:34.987292 kubelet[1568]: E0906 00:12:34.987263 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:34.987581 env[1197]: time="2025-09-06T00:12:34.987547198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:34.994734 kubelet[1568]: E0906 00:12:34.994709 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:34.994985 env[1197]: time="2025-09-06T00:12:34.994959761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:35.007762 kubelet[1568]: I0906 00:12:35.007665 1568 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:35.007915 kubelet[1568]: E0906 00:12:35.007884 1568 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 6 00:12:35.110048 kubelet[1568]: W0906 00:12:35.109807 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:35.110048 kubelet[1568]: E0906 00:12:35.109934 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:35.234239 kubelet[1568]: W0906 00:12:35.234153 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: 
connection refused Sep 6 00:12:35.234434 kubelet[1568]: E0906 00:12:35.234259 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:35.353197 kubelet[1568]: E0906 00:12:35.353129 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Sep 6 00:12:35.433122 kubelet[1568]: W0906 00:12:35.433049 1568 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Sep 6 00:12:35.433122 kubelet[1568]: E0906 00:12:35.433119 1568 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:35.810050 kubelet[1568]: I0906 00:12:35.809886 1568 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:35.810637 kubelet[1568]: E0906 00:12:35.810580 1568 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Sep 6 00:12:35.982576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066692875.mount: Deactivated successfully. 
Sep 6 00:12:35.990321 env[1197]: time="2025-09-06T00:12:35.990254223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:35.993242 env[1197]: time="2025-09-06T00:12:35.993211647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:35.994846 env[1197]: time="2025-09-06T00:12:35.994812683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:35.995805 env[1197]: time="2025-09-06T00:12:35.995756714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:35.997569 env[1197]: time="2025-09-06T00:12:35.997531782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:35.998792 env[1197]: time="2025-09-06T00:12:35.998762681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.000201 env[1197]: time="2025-09-06T00:12:36.000176009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.002411 env[1197]: time="2025-09-06T00:12:36.002385293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.004727 env[1197]: time="2025-09-06T00:12:36.004682703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.005588 env[1197]: time="2025-09-06T00:12:36.005563452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.006743 kubelet[1568]: E0906 00:12:36.006706 1568 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:12:36.007209 env[1197]: time="2025-09-06T00:12:36.007174313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.008530 env[1197]: time="2025-09-06T00:12:36.008479221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 00:12:36.038191 env[1197]: time="2025-09-06T00:12:36.037966670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:36.038191 env[1197]: time="2025-09-06T00:12:36.038147504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:36.038191 env[1197]: time="2025-09-06T00:12:36.038157814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:36.038424 env[1197]: time="2025-09-06T00:12:36.038303601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/195dee9f75426f4d7b46c52235d8f1998633492d298d2898690ceab72b5e7ec7 pid=1611 runtime=io.containerd.runc.v2 Sep 6 00:12:36.043509 env[1197]: time="2025-09-06T00:12:36.042308176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:36.043509 env[1197]: time="2025-09-06T00:12:36.042385294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:36.043509 env[1197]: time="2025-09-06T00:12:36.042410461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:36.043509 env[1197]: time="2025-09-06T00:12:36.042556420Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c3f039386e2e907ddcac069f581302eb7741231a43eba9491e6486fe7c746e4 pid=1630 runtime=io.containerd.runc.v2 Sep 6 00:12:36.043824 env[1197]: time="2025-09-06T00:12:36.043752410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:36.043824 env[1197]: time="2025-09-06T00:12:36.043781736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:36.043824 env[1197]: time="2025-09-06T00:12:36.043791263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:36.044070 env[1197]: time="2025-09-06T00:12:36.044029178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e935a522dfb8d1a087e3822634018fe6e55397e1b3d67d7a578640363795fc7e pid=1629 runtime=io.containerd.runc.v2 Sep 6 00:12:36.072714 systemd[1]: Started cri-containerd-195dee9f75426f4d7b46c52235d8f1998633492d298d2898690ceab72b5e7ec7.scope. Sep 6 00:12:36.077656 systemd[1]: Started cri-containerd-e935a522dfb8d1a087e3822634018fe6e55397e1b3d67d7a578640363795fc7e.scope. Sep 6 00:12:36.087289 systemd[1]: Started cri-containerd-3c3f039386e2e907ddcac069f581302eb7741231a43eba9491e6486fe7c746e4.scope. 
Sep 6 00:12:36.189976 env[1197]: time="2025-09-06T00:12:36.189914680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"195dee9f75426f4d7b46c52235d8f1998633492d298d2898690ceab72b5e7ec7\"" Sep 6 00:12:36.194432 kubelet[1568]: E0906 00:12:36.194397 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:36.196268 env[1197]: time="2025-09-06T00:12:36.196223679Z" level=info msg="CreateContainer within sandbox \"195dee9f75426f4d7b46c52235d8f1998633492d298d2898690ceab72b5e7ec7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:12:36.198517 env[1197]: time="2025-09-06T00:12:36.198474931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e935a522dfb8d1a087e3822634018fe6e55397e1b3d67d7a578640363795fc7e\"" Sep 6 00:12:36.198976 kubelet[1568]: E0906 00:12:36.198947 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:36.200397 env[1197]: time="2025-09-06T00:12:36.200368602Z" level=info msg="CreateContainer within sandbox \"e935a522dfb8d1a087e3822634018fe6e55397e1b3d67d7a578640363795fc7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:12:36.210713 env[1197]: time="2025-09-06T00:12:36.210661315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a36c0195b2867418aa50a7dcd99b3b76,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c3f039386e2e907ddcac069f581302eb7741231a43eba9491e6486fe7c746e4\"" Sep 6 00:12:36.211869 kubelet[1568]: E0906 00:12:36.211842 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:36.213912 env[1197]: time="2025-09-06T00:12:36.213879801Z" level=info msg="CreateContainer within sandbox \"3c3f039386e2e907ddcac069f581302eb7741231a43eba9491e6486fe7c746e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:12:36.219121 env[1197]: time="2025-09-06T00:12:36.219094082Z" level=info msg="CreateContainer within sandbox \"195dee9f75426f4d7b46c52235d8f1998633492d298d2898690ceab72b5e7ec7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85d2f60d4be312599e0facc3adba28a714ab2b34d9bbbf3ff72ea54f009329f3\"" Sep 6 00:12:36.219798 env[1197]: time="2025-09-06T00:12:36.219752908Z" level=info msg="StartContainer for \"85d2f60d4be312599e0facc3adba28a714ab2b34d9bbbf3ff72ea54f009329f3\"" Sep 6 00:12:36.226624 env[1197]: time="2025-09-06T00:12:36.226558964Z" level=info msg="CreateContainer within sandbox \"e935a522dfb8d1a087e3822634018fe6e55397e1b3d67d7a578640363795fc7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd4438f5ebd0b90b32cb2150d664603dd98281d5e5aadf99851e22a1dda5a0f4\"" Sep 6 00:12:36.227164 env[1197]: time="2025-09-06T00:12:36.227129822Z" level=info msg="StartContainer for \"cd4438f5ebd0b90b32cb2150d664603dd98281d5e5aadf99851e22a1dda5a0f4\"" Sep 6 00:12:36.231865 env[1197]: time="2025-09-06T00:12:36.231818522Z" level=info 
msg="CreateContainer within sandbox \"3c3f039386e2e907ddcac069f581302eb7741231a43eba9491e6486fe7c746e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37f9dfb40d19428d45e81faa0f3190b0e43c8b499f501146487cfc112f4bd5a3\"" Sep 6 00:12:36.232338 env[1197]: time="2025-09-06T00:12:36.232305931Z" level=info msg="StartContainer for \"37f9dfb40d19428d45e81faa0f3190b0e43c8b499f501146487cfc112f4bd5a3\"" Sep 6 00:12:36.236363 systemd[1]: Started cri-containerd-85d2f60d4be312599e0facc3adba28a714ab2b34d9bbbf3ff72ea54f009329f3.scope. Sep 6 00:12:36.252342 systemd[1]: Started cri-containerd-cd4438f5ebd0b90b32cb2150d664603dd98281d5e5aadf99851e22a1dda5a0f4.scope. Sep 6 00:12:36.258845 systemd[1]: Started cri-containerd-37f9dfb40d19428d45e81faa0f3190b0e43c8b499f501146487cfc112f4bd5a3.scope. Sep 6 00:12:36.288440 env[1197]: time="2025-09-06T00:12:36.288370249Z" level=info msg="StartContainer for \"85d2f60d4be312599e0facc3adba28a714ab2b34d9bbbf3ff72ea54f009329f3\" returns successfully" Sep 6 00:12:36.332916 env[1197]: time="2025-09-06T00:12:36.332799374Z" level=info msg="StartContainer for \"cd4438f5ebd0b90b32cb2150d664603dd98281d5e5aadf99851e22a1dda5a0f4\" returns successfully" Sep 6 00:12:36.339755 env[1197]: time="2025-09-06T00:12:36.339705891Z" level=info msg="StartContainer for \"37f9dfb40d19428d45e81faa0f3190b0e43c8b499f501146487cfc112f4bd5a3\" returns successfully" Sep 6 00:12:36.970802 kubelet[1568]: E0906 00:12:36.970767 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:36.972286 kubelet[1568]: E0906 00:12:36.972267 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:36.973529 kubelet[1568]: E0906 00:12:36.973511 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:37.412809 kubelet[1568]: I0906 00:12:37.412769 1568 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:37.976176 kubelet[1568]: E0906 00:12:37.976132 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:38.766192 update_engine[1185]: I0906 00:12:38.766098 1185 update_attempter.cc:509] Updating boot flags... Sep 6 00:12:38.783388 kubelet[1568]: E0906 00:12:38.783336 1568 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:12:38.873227 kubelet[1568]: I0906 00:12:38.873183 1568 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:12:38.873227 kubelet[1568]: E0906 00:12:38.873232 1568 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:12:38.927177 kubelet[1568]: I0906 00:12:38.927134 1568 apiserver.go:52] "Watching apiserver" Sep 6 00:12:38.949318 kubelet[1568]: I0906 00:12:38.949267 1568 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:12:40.931173 systemd[1]: Reloading. 
Sep 6 00:12:41.021802 /usr/lib/systemd/system-generators/torcx-generator[1877]: time="2025-09-06T00:12:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:12:41.021842 /usr/lib/systemd/system-generators/torcx-generator[1877]: time="2025-09-06T00:12:41Z" level=info msg="torcx already run" Sep 6 00:12:41.066038 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:12:41.066058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:12:41.084239 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:12:41.180868 systemd[1]: Stopping kubelet.service... Sep 6 00:12:41.203498 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:12:41.203786 systemd[1]: Stopped kubelet.service. Sep 6 00:12:41.205904 systemd[1]: Starting kubelet.service... Sep 6 00:12:41.303530 systemd[1]: Started kubelet.service. Sep 6 00:12:41.347618 kubelet[1922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:12:41.348313 kubelet[1922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:12:41.348313 kubelet[1922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:12:41.348441 kubelet[1922]: I0906 00:12:41.348396 1922 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:12:41.355234 kubelet[1922]: I0906 00:12:41.355171 1922 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:12:41.355234 kubelet[1922]: I0906 00:12:41.355215 1922 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:12:41.355523 kubelet[1922]: I0906 00:12:41.355496 1922 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:12:41.356638 kubelet[1922]: I0906 00:12:41.356617 1922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:12:41.358230 kubelet[1922]: I0906 00:12:41.358205 1922 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:12:41.361643 kubelet[1922]: E0906 00:12:41.361605 1922 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:12:41.361705 kubelet[1922]: I0906 00:12:41.361645 1922 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 6 00:12:41.366020 kubelet[1922]: I0906 00:12:41.365966 1922 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:12:41.366139 kubelet[1922]: I0906 00:12:41.366119 1922 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:12:41.366298 kubelet[1922]: I0906 00:12:41.366252 1922 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:12:41.366897 kubelet[1922]: I0906 00:12:41.366302 1922 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:12:41.367042 kubelet[1922]: I0906 00:12:41.366908 1922 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:12:41.367042 kubelet[1922]: I0906 00:12:41.366927 1922 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:12:41.367042 kubelet[1922]: I0906 00:12:41.366970 1922 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:12:41.367111 kubelet[1922]: I0906 00:12:41.367100 1922 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:12:41.367138 kubelet[1922]: I0906 00:12:41.367115 1922 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:12:41.367159 kubelet[1922]: I0906 00:12:41.367147 1922 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:12:41.367182 kubelet[1922]: I0906 00:12:41.367159 1922 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:12:41.371631 kubelet[1922]: I0906 00:12:41.367985 1922 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:12:41.371631 kubelet[1922]: I0906 00:12:41.368746 1922 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:12:41.372296 kubelet[1922]: I0906 00:12:41.372270 1922 server.go:1274] 
"Started kubelet" Sep 6 00:12:41.375839 kubelet[1922]: I0906 00:12:41.373759 1922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:12:41.375839 kubelet[1922]: I0906 00:12:41.373793 1922 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:12:41.375839 kubelet[1922]: I0906 00:12:41.374311 1922 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:12:41.378200 kubelet[1922]: I0906 00:12:41.378175 1922 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:12:41.379294 kubelet[1922]: I0906 00:12:41.379254 1922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:12:41.379656 kubelet[1922]: I0906 00:12:41.379628 1922 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:12:41.380259 kubelet[1922]: I0906 00:12:41.380229 1922 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:12:41.380367 kubelet[1922]: I0906 00:12:41.380349 1922 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:12:41.380484 kubelet[1922]: I0906 00:12:41.380465 1922 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:12:41.380613 kubelet[1922]: E0906 00:12:41.380584 1922 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:12:41.381047 kubelet[1922]: I0906 00:12:41.381028 1922 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:12:41.381157 kubelet[1922]: I0906 00:12:41.381137 1922 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:12:41.384248 kubelet[1922]: I0906 00:12:41.384228 1922 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:12:41.389489 kubelet[1922]: I0906 00:12:41.389445 1922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:12:41.392482 kubelet[1922]: I0906 00:12:41.392452 1922 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:12:41.392482 kubelet[1922]: I0906 00:12:41.392484 1922 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:12:41.392590 kubelet[1922]: I0906 00:12:41.392513 1922 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:12:41.392637 kubelet[1922]: E0906 00:12:41.392606 1922 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:12:41.415087 kubelet[1922]: I0906 00:12:41.415045 1922 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:12:41.415087 kubelet[1922]: I0906 00:12:41.415064 1922 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:12:41.415087 kubelet[1922]: I0906 00:12:41.415084 1922 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:12:41.415298 kubelet[1922]: I0906 00:12:41.415212 1922 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:12:41.415298 kubelet[1922]: I0906 00:12:41.415223 1922 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:12:41.415298 kubelet[1922]: I0906 00:12:41.415247 1922 policy_none.go:49] "None policy: Start" Sep 6 00:12:41.415734 kubelet[1922]: I0906 00:12:41.415712 1922 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:12:41.415793 kubelet[1922]: I0906 00:12:41.415744 1922 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:12:41.415943 kubelet[1922]: I0906 00:12:41.415921 1922 state_mem.go:75] "Updated machine memory state" Sep 6 00:12:41.419522 kubelet[1922]: I0906 00:12:41.419494 1922 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:12:41.419776 kubelet[1922]: I0906 00:12:41.419758 1922 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:12:41.420295 kubelet[1922]: I0906 00:12:41.419773 1922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:12:41.420295 kubelet[1922]: I0906 00:12:41.420053 1922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:12:41.530099 kubelet[1922]: I0906 00:12:41.529970 1922 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:12:41.581854 kubelet[1922]: I0906 00:12:41.581778 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:41.581854 kubelet[1922]: I0906 00:12:41.581845 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:41.582098 kubelet[1922]: I0906 00:12:41.581879 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:12:41.582098 
kubelet[1922]: I0906 00:12:41.581913 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:41.582098 kubelet[1922]: I0906 00:12:41.581936 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:41.582098 kubelet[1922]: I0906 00:12:41.581955 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:41.582098 kubelet[1922]: I0906 00:12:41.581974 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:41.582335 kubelet[1922]: I0906 00:12:41.582191 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a36c0195b2867418aa50a7dcd99b3b76-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a36c0195b2867418aa50a7dcd99b3b76\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:41.582447 kubelet[1922]: I0906 00:12:41.582404 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:12:41.658704 kubelet[1922]: I0906 00:12:41.658664 1922 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 6 00:12:41.658918 kubelet[1922]: I0906 00:12:41.658761 1922 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:12:41.923221 sudo[1957]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:12:41.923453 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:12:41.961434 kubelet[1922]: E0906 00:12:41.961392 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:41.964483 kubelet[1922]: E0906 00:12:41.964457 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:41.964648 kubelet[1922]: E0906 00:12:41.964584 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:42.368214 kubelet[1922]: I0906 00:12:42.368067 1922 apiserver.go:52] "Watching apiserver" Sep 6 00:12:42.380731 kubelet[1922]: I0906 00:12:42.380704 1922 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:12:42.403028 kubelet[1922]: E0906 00:12:42.402683 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:42.403028 kubelet[1922]: E0906 00:12:42.402959 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:42.542207 sudo[1957]: pam_unix(sudo:session): session closed for user root Sep 6 00:12:42.617111 kubelet[1922]: I0906 00:12:42.616827 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.61679076 podStartE2EDuration="1.61679076s" podCreationTimestamp="2025-09-06 00:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:12:42.615153896 +0000 UTC m=+1.306533787" watchObservedRunningTime="2025-09-06 00:12:42.61679076 +0000 UTC m=+1.308170651" Sep 6 00:12:42.617370 kubelet[1922]: E0906 00:12:42.617184 1922 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:12:42.617370 kubelet[1922]: E0906 00:12:42.617337 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:42.658329 kubelet[1922]: I0906 00:12:42.658265 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.658226534 podStartE2EDuration="1.658226534s" podCreationTimestamp="2025-09-06 00:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:12:42.647468351 +0000 UTC m=+1.338848262" watchObservedRunningTime="2025-09-06 00:12:42.658226534 +0000 UTC m=+1.349606425" Sep 6 00:12:42.658582 kubelet[1922]: I0906 00:12:42.658435 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.658430551 podStartE2EDuration="1.658430551s" podCreationTimestamp="2025-09-06 00:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:12:42.658383251 +0000 UTC m=+1.349763142" watchObservedRunningTime="2025-09-06 00:12:42.658430551 +0000 UTC m=+1.349810442" Sep 6 00:12:43.403753 kubelet[1922]: E0906 00:12:43.403704 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:44.326456 sudo[1295]: pam_unix(sudo:session): session closed for user root Sep 6 00:12:44.327932 sshd[1292]: pam_unix(sshd:session): session closed for user core Sep 6 00:12:44.330686 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:60620.service: Deactivated successfully. 
Sep 6 00:12:44.331556 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:12:44.331706 systemd[1]: session-5.scope: Consumed 4.143s CPU time. Sep 6 00:12:44.332192 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:12:44.332923 systemd-logind[1183]: Removed session 5. Sep 6 00:12:45.622127 kubelet[1922]: E0906 00:12:45.622060 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:45.794112 kubelet[1922]: E0906 00:12:45.794073 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:46.061877 kubelet[1922]: I0906 00:12:46.061839 1922 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:12:46.062333 env[1197]: time="2025-09-06T00:12:46.062284305Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:12:46.062597 kubelet[1922]: I0906 00:12:46.062570 1922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:12:47.080868 systemd[1]: Created slice kubepods-besteffort-pod5da205f1_a869_49d0_97c2_684d320c17c0.slice. Sep 6 00:12:47.091717 systemd[1]: Created slice kubepods-burstable-pod859a35b3_2b01_4a02_9dcb_98985e57e044.slice. Sep 6 00:12:47.195646 systemd[1]: Created slice kubepods-besteffort-pod616d4cbb_c4e1_4687_b95c_af387fb37bc2.slice. Sep 6 00:12:47.216935 kubelet[1922]: I0906 00:12:47.216864 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cni-path\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.216935 kubelet[1922]: I0906 00:12:47.216922 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-kernel\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217021 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-bpf-maps\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217081 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-cgroup\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217108 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-config-path\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217132 
1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-hubble-tls\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217156 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-lib-modules\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217504 kubelet[1922]: I0906 00:12:47.217177 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-xtables-lock\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217649 kubelet[1922]: I0906 00:12:47.217205 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/859a35b3-2b01-4a02-9dcb-98985e57e044-clustermesh-secrets\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217649 kubelet[1922]: I0906 00:12:47.217227 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj8k8\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-kube-api-access-wj8k8\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217649 kubelet[1922]: I0906 00:12:47.217252 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5da205f1-a869-49d0-97c2-684d320c17c0-kube-proxy\") pod \"kube-proxy-nj6p8\" (UID: \"5da205f1-a869-49d0-97c2-684d320c17c0\") " pod="kube-system/kube-proxy-nj6p8" Sep 6 00:12:47.217649 kubelet[1922]: I0906 00:12:47.217274 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5da205f1-a869-49d0-97c2-684d320c17c0-lib-modules\") pod \"kube-proxy-nj6p8\" (UID: \"5da205f1-a869-49d0-97c2-684d320c17c0\") " pod="kube-system/kube-proxy-nj6p8" Sep 6 00:12:47.217649 kubelet[1922]: I0906 00:12:47.217297 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5da205f1-a869-49d0-97c2-684d320c17c0-xtables-lock\") pod \"kube-proxy-nj6p8\" (UID: \"5da205f1-a869-49d0-97c2-684d320c17c0\") " pod="kube-system/kube-proxy-nj6p8" Sep 6 00:12:47.217848 kubelet[1922]: I0906 00:12:47.217317 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-run\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217848 kubelet[1922]: I0906 00:12:47.217335 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-hostproc\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217848 kubelet[1922]: I0906 00:12:47.217354 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-etc-cni-netd\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.217848 kubelet[1922]: I0906 00:12:47.217378 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q966d\" (UniqueName: \"kubernetes.io/projected/5da205f1-a869-49d0-97c2-684d320c17c0-kube-api-access-q966d\") pod \"kube-proxy-nj6p8\" (UID: \"5da205f1-a869-49d0-97c2-684d320c17c0\") " pod="kube-system/kube-proxy-nj6p8" Sep 6 00:12:47.217848 kubelet[1922]: I0906 00:12:47.217399 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-net\") pod \"cilium-kwxtr\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " pod="kube-system/cilium-kwxtr" Sep 6 00:12:47.318034 kubelet[1922]: I0906 00:12:47.317943 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/616d4cbb-c4e1-4687-b95c-af387fb37bc2-cilium-config-path\") pod \"cilium-operator-5d85765b45-ztkg9\" (UID: \"616d4cbb-c4e1-4687-b95c-af387fb37bc2\") " pod="kube-system/cilium-operator-5d85765b45-ztkg9" Sep 6 00:12:47.318278 kubelet[1922]: I0906 00:12:47.318131 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gjv\" (UniqueName: \"kubernetes.io/projected/616d4cbb-c4e1-4687-b95c-af387fb37bc2-kube-api-access-44gjv\") pod \"cilium-operator-5d85765b45-ztkg9\" (UID: \"616d4cbb-c4e1-4687-b95c-af387fb37bc2\") " pod="kube-system/cilium-operator-5d85765b45-ztkg9" Sep 6 00:12:47.318747 kubelet[1922]: I0906 00:12:47.318647 1922 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:12:47.388657 kubelet[1922]: E0906 00:12:47.388483 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:47.389549 env[1197]: time="2025-09-06T00:12:47.389412889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj6p8,Uid:5da205f1-a869-49d0-97c2-684d320c17c0,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:47.394250 kubelet[1922]: E0906 00:12:47.394192 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:47.394622 env[1197]: time="2025-09-06T00:12:47.394591320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwxtr,Uid:859a35b3-2b01-4a02-9dcb-98985e57e044,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:48.098882 kubelet[1922]: E0906 00:12:48.098826 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:48.099423 env[1197]: time="2025-09-06T00:12:48.099371008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ztkg9,Uid:616d4cbb-c4e1-4687-b95c-af387fb37bc2,Namespace:kube-system,Attempt:0,}" Sep 6 00:12:48.546709 env[1197]: time="2025-09-06T00:12:48.539741245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:48.546709 env[1197]: time="2025-09-06T00:12:48.539986338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:48.546709 env[1197]: time="2025-09-06T00:12:48.540078001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:48.546709 env[1197]: time="2025-09-06T00:12:48.540220370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e pid=2018 runtime=io.containerd.runc.v2 Sep 6 00:12:48.553663 env[1197]: time="2025-09-06T00:12:48.553579966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:48.553900 env[1197]: time="2025-09-06T00:12:48.553680356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:48.553900 env[1197]: time="2025-09-06T00:12:48.553717476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:48.553987 env[1197]: time="2025-09-06T00:12:48.553918897Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa pid=2038 runtime=io.containerd.runc.v2 Sep 6 00:12:48.564923 systemd[1]: Started cri-containerd-ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e.scope. 
Sep 6 00:12:48.576473 systemd[1]: Started cri-containerd-e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa.scope. Sep 6 00:12:48.605489 env[1197]: time="2025-09-06T00:12:48.605399851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:12:48.605790 env[1197]: time="2025-09-06T00:12:48.605736367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:12:48.605936 env[1197]: time="2025-09-06T00:12:48.605906439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:12:48.606341 env[1197]: time="2025-09-06T00:12:48.606305081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d pid=2074 runtime=io.containerd.runc.v2 Sep 6 00:12:48.628855 systemd[1]: Started cri-containerd-e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d.scope. Sep 6 00:12:48.636151 env[1197]: time="2025-09-06T00:12:48.636104764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kwxtr,Uid:859a35b3-2b01-4a02-9dcb-98985e57e044,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\"" Sep 6 00:12:48.637400 kubelet[1922]: E0906 00:12:48.637363 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:48.640357 env[1197]: time="2025-09-06T00:12:48.640317866Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:12:48.648419 env[1197]: time="2025-09-06T00:12:48.648358211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj6p8,Uid:5da205f1-a869-49d0-97c2-684d320c17c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e\"" Sep 6 00:12:48.649161 kubelet[1922]: E0906 00:12:48.649124 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:48.651070 env[1197]: time="2025-09-06T00:12:48.651024118Z" level=info msg="CreateContainer within sandbox \"ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:12:48.674956 env[1197]: time="2025-09-06T00:12:48.674889448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ztkg9,Uid:616d4cbb-c4e1-4687-b95c-af387fb37bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\"" Sep 6 00:12:48.675793 kubelet[1922]: E0906 00:12:48.675768 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:48.732467 env[1197]: time="2025-09-06T00:12:48.732399775Z" level=info msg="CreateContainer within sandbox \"ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"d7f6846a6393f589ad75d15f1676650bdb81b05551050d95fc8bd7ea54a0a34f\"" Sep 6 00:12:48.732923 env[1197]: time="2025-09-06T00:12:48.732882418Z" level=info msg="StartContainer for \"d7f6846a6393f589ad75d15f1676650bdb81b05551050d95fc8bd7ea54a0a34f\"" Sep 6 00:12:48.747029 systemd[1]: Started cri-containerd-d7f6846a6393f589ad75d15f1676650bdb81b05551050d95fc8bd7ea54a0a34f.scope. Sep 6 00:12:48.773498 env[1197]: time="2025-09-06T00:12:48.773453115Z" level=info msg="StartContainer for \"d7f6846a6393f589ad75d15f1676650bdb81b05551050d95fc8bd7ea54a0a34f\" returns successfully" Sep 6 00:12:49.417544 kubelet[1922]: E0906 00:12:49.417503 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:49.427937 kubelet[1922]: I0906 00:12:49.427839 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nj6p8" podStartSLOduration=2.42781261 podStartE2EDuration="2.42781261s" podCreationTimestamp="2025-09-06 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:12:49.427550374 +0000 UTC m=+8.118930265" watchObservedRunningTime="2025-09-06 00:12:49.42781261 +0000 UTC m=+8.119192511" Sep 6 00:12:49.537561 systemd[1]: run-containerd-runc-k8s.io-ed0427d0df4cf5f2d0197690452aa91448ec50956842d3b8eafa2dd76e83a84e-runc.J3ZUI5.mount: Deactivated successfully. Sep 6 00:12:51.209592 kubelet[1922]: E0906 00:12:51.209474 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:51.421909 kubelet[1922]: E0906 00:12:51.421864 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:54.926873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799673956.mount: Deactivated successfully. 
Sep 6 00:12:55.627487 kubelet[1922]: E0906 00:12:55.627428 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:12:55.799488 kubelet[1922]: E0906 00:12:55.799440 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:00.397216 env[1197]: time="2025-09-06T00:13:00.397153365Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:00.399246 env[1197]: time="2025-09-06T00:13:00.399183646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:00.400805 env[1197]: time="2025-09-06T00:13:00.400762307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:00.401333 env[1197]: time="2025-09-06T00:13:00.401282205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:13:00.402353 env[1197]: time="2025-09-06T00:13:00.402301363Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:13:00.404143 env[1197]: time="2025-09-06T00:13:00.404104617Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:13:00.819101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189094105.mount: Deactivated successfully. Sep 6 00:13:01.209732 env[1197]: time="2025-09-06T00:13:01.209646413Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\"" Sep 6 00:13:01.210365 env[1197]: time="2025-09-06T00:13:01.210303339Z" level=info msg="StartContainer for \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\"" Sep 6 00:13:01.230010 systemd[1]: Started cri-containerd-4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a.scope. Sep 6 00:13:01.264910 env[1197]: time="2025-09-06T00:13:01.264856644Z" level=info msg="StartContainer for \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\" returns successfully" Sep 6 00:13:01.274566 systemd[1]: cri-containerd-4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a.scope: Deactivated successfully. 
Sep 6 00:13:01.442691 kubelet[1922]: E0906 00:13:01.442650 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:01.815516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a-rootfs.mount: Deactivated successfully. Sep 6 00:13:02.303015 env[1197]: time="2025-09-06T00:13:02.302920593Z" level=info msg="shim disconnected" id=4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a Sep 6 00:13:02.303015 env[1197]: time="2025-09-06T00:13:02.303013598Z" level=warning msg="cleaning up after shim disconnected" id=4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a namespace=k8s.io Sep 6 00:13:02.303504 env[1197]: time="2025-09-06T00:13:02.303030470Z" level=info msg="cleaning up dead shim" Sep 6 00:13:02.311064 env[1197]: time="2025-09-06T00:13:02.310975255Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:13:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n" Sep 6 00:13:02.446566 kubelet[1922]: E0906 00:13:02.446510 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:02.450232 env[1197]: time="2025-09-06T00:13:02.450192798Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:13:02.471981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917988066.mount: Deactivated successfully. Sep 6 00:13:02.480148 env[1197]: time="2025-09-06T00:13:02.480089909Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\"" Sep 6 00:13:02.480761 env[1197]: time="2025-09-06T00:13:02.480720877Z" level=info msg="StartContainer for \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\"" Sep 6 00:13:02.498061 systemd[1]: Started cri-containerd-f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735.scope. Sep 6 00:13:02.523342 env[1197]: time="2025-09-06T00:13:02.523291800Z" level=info msg="StartContainer for \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\" returns successfully" Sep 6 00:13:02.534080 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:13:02.534385 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:13:02.534593 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:13:02.536557 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:13:02.538273 systemd[1]: cri-containerd-f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735.scope: Deactivated successfully. Sep 6 00:13:02.545864 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:13:02.564866 env[1197]: time="2025-09-06T00:13:02.564732437Z" level=info msg="shim disconnected" id=f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735 Sep 6 00:13:02.564866 env[1197]: time="2025-09-06T00:13:02.564784114Z" level=warning msg="cleaning up after shim disconnected" id=f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735 namespace=k8s.io Sep 6 00:13:02.564866 env[1197]: time="2025-09-06T00:13:02.564793782Z" level=info msg="cleaning up dead shim" Sep 6 00:13:02.571752 env[1197]: time="2025-09-06T00:13:02.571675889Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:13:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2417 runtime=io.containerd.runc.v2\n" Sep 6 00:13:02.815515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735-rootfs.mount: Deactivated successfully. Sep 6 00:13:03.449493 kubelet[1922]: E0906 00:13:03.449446 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:03.452676 env[1197]: time="2025-09-06T00:13:03.452617352Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:13:03.822598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount387645554.mount: Deactivated successfully. Sep 6 00:13:03.843903 env[1197]: time="2025-09-06T00:13:03.843836672Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\"" Sep 6 00:13:03.844533 env[1197]: time="2025-09-06T00:13:03.844466526Z" level=info msg="StartContainer for \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\"" Sep 6 00:13:03.863031 systemd[1]: Started cri-containerd-40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b.scope. Sep 6 00:13:03.893722 systemd[1]: cri-containerd-40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b.scope: Deactivated successfully. 
Sep 6 00:13:03.896143 env[1197]: time="2025-09-06T00:13:03.896097694Z" level=info msg="StartContainer for \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\" returns successfully" Sep 6 00:13:03.933923 env[1197]: time="2025-09-06T00:13:03.933822388Z" level=info msg="shim disconnected" id=40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b Sep 6 00:13:03.933923 env[1197]: time="2025-09-06T00:13:03.933910273Z" level=warning msg="cleaning up after shim disconnected" id=40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b namespace=k8s.io Sep 6 00:13:03.933923 env[1197]: time="2025-09-06T00:13:03.933928087Z" level=info msg="cleaning up dead shim" Sep 6 00:13:03.941184 env[1197]: time="2025-09-06T00:13:03.941117810Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:13:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2474 runtime=io.containerd.runc.v2\n" Sep 6 00:13:04.454127 kubelet[1922]: E0906 00:13:04.454087 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:04.456034 env[1197]: time="2025-09-06T00:13:04.455974794Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:13:04.472595 env[1197]: time="2025-09-06T00:13:04.472536478Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\"" Sep 6 00:13:04.473096 env[1197]: time="2025-09-06T00:13:04.473063971Z" level=info msg="StartContainer for \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\"" Sep 6 00:13:04.489503 systemd[1]: Started cri-containerd-a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2.scope. Sep 6 00:13:04.518457 systemd[1]: cri-containerd-a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2.scope: Deactivated successfully. Sep 6 00:13:04.546765 env[1197]: time="2025-09-06T00:13:04.546692286Z" level=info msg="StartContainer for \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\" returns successfully" Sep 6 00:13:04.793056 env[1197]: time="2025-09-06T00:13:04.792903339Z" level=info msg="shim disconnected" id=a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2 Sep 6 00:13:04.793056 env[1197]: time="2025-09-06T00:13:04.792956700Z" level=warning msg="cleaning up after shim disconnected" id=a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2 namespace=k8s.io Sep 6 00:13:04.793056 env[1197]: time="2025-09-06T00:13:04.792966007Z" level=info msg="cleaning up dead shim" Sep 6 00:13:04.800209 env[1197]: time="2025-09-06T00:13:04.800164474Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:13:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n" Sep 6 00:13:04.819402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b-rootfs.mount: Deactivated successfully. 
Sep 6 00:13:04.957205 env[1197]: time="2025-09-06T00:13:04.957146994Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:04.961326 env[1197]: time="2025-09-06T00:13:04.961271892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:04.966154 env[1197]: time="2025-09-06T00:13:04.966097075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:04.966598 env[1197]: time="2025-09-06T00:13:04.966555748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:13:04.968965 env[1197]: time="2025-09-06T00:13:04.968925955Z" level=info msg="CreateContainer within sandbox \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:13:04.986609 env[1197]: time="2025-09-06T00:13:04.986562850Z" level=info msg="CreateContainer within sandbox \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\"" Sep 6 00:13:04.987398 env[1197]: time="2025-09-06T00:13:04.987320575Z" level=info msg="StartContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\"" Sep 6 00:13:05.010075 systemd[1]: Started cri-containerd-cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f.scope. 
Sep 6 00:13:05.042702 env[1197]: time="2025-09-06T00:13:05.042619290Z" level=info msg="StartContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" returns successfully" Sep 6 00:13:05.456789 kubelet[1922]: E0906 00:13:05.456732 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:05.459848 kubelet[1922]: E0906 00:13:05.459813 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:05.462607 env[1197]: time="2025-09-06T00:13:05.462537280Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:13:05.494651 env[1197]: time="2025-09-06T00:13:05.494575918Z" level=info msg="CreateContainer within sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\"" Sep 6 00:13:05.495977 kubelet[1922]: I0906 00:13:05.495915 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ztkg9" podStartSLOduration=2.204591501 podStartE2EDuration="18.495895057s" podCreationTimestamp="2025-09-06 00:12:47 +0000 UTC" firstStartedPulling="2025-09-06 00:12:48.676286387 +0000 UTC m=+7.367666278" lastFinishedPulling="2025-09-06 00:13:04.967589943 +0000 UTC m=+23.658969834" observedRunningTime="2025-09-06 00:13:05.472380757 +0000 UTC m=+24.163760648" watchObservedRunningTime="2025-09-06 00:13:05.495895057 +0000 UTC m=+24.187274948" Sep 6 00:13:05.496595 env[1197]: time="2025-09-06T00:13:05.496553195Z" level=info msg="StartContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\"" Sep 6 00:13:05.558645 systemd[1]: Started cri-containerd-904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277.scope. Sep 6 00:13:05.559876 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:33062.service. Sep 6 00:13:05.702203 env[1197]: time="2025-09-06T00:13:05.702097450Z" level=info msg="StartContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" returns successfully" Sep 6 00:13:05.813957 sshd[2602]: Accepted publickey for core from 10.0.0.1 port 33062 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:05.814737 sshd[2602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:05.819906 systemd[1]: run-containerd-runc-k8s.io-cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f-runc.wIqLWu.mount: Deactivated successfully. Sep 6 00:13:05.824546 systemd[1]: Started session-6.scope. Sep 6 00:13:05.824825 systemd-logind[1183]: New session 6 of user core. Sep 6 00:13:05.829146 kubelet[1922]: I0906 00:13:05.829101 1922 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:13:05.873678 systemd[1]: Created slice kubepods-burstable-pod9f7f49f2_0bc1_4d5e_987c_73759f27c801.slice. Sep 6 00:13:05.881440 systemd[1]: Created slice kubepods-burstable-pod000850b5_62a7_4f45_8bc2_a16b98dbf30f.slice. 
Sep 6 00:13:05.953224 kubelet[1922]: I0906 00:13:05.953163 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2kc\" (UniqueName: \"kubernetes.io/projected/9f7f49f2-0bc1-4d5e-987c-73759f27c801-kube-api-access-lg2kc\") pod \"coredns-7c65d6cfc9-4rxcq\" (UID: \"9f7f49f2-0bc1-4d5e-987c-73759f27c801\") " pod="kube-system/coredns-7c65d6cfc9-4rxcq" Sep 6 00:13:05.953224 kubelet[1922]: I0906 00:13:05.953227 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcpp\" (UniqueName: \"kubernetes.io/projected/000850b5-62a7-4f45-8bc2-a16b98dbf30f-kube-api-access-xkcpp\") pod \"coredns-7c65d6cfc9-7vzd9\" (UID: \"000850b5-62a7-4f45-8bc2-a16b98dbf30f\") " pod="kube-system/coredns-7c65d6cfc9-7vzd9" Sep 6 00:13:05.953490 kubelet[1922]: I0906 00:13:05.953255 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f7f49f2-0bc1-4d5e-987c-73759f27c801-config-volume\") pod \"coredns-7c65d6cfc9-4rxcq\" (UID: \"9f7f49f2-0bc1-4d5e-987c-73759f27c801\") " pod="kube-system/coredns-7c65d6cfc9-4rxcq" Sep 6 00:13:05.953490 kubelet[1922]: I0906 00:13:05.953284 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/000850b5-62a7-4f45-8bc2-a16b98dbf30f-config-volume\") pod \"coredns-7c65d6cfc9-7vzd9\" (UID: \"000850b5-62a7-4f45-8bc2-a16b98dbf30f\") " pod="kube-system/coredns-7c65d6cfc9-7vzd9" Sep 6 00:13:06.020315 sshd[2602]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:06.022737 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:33062.service: Deactivated successfully. Sep 6 00:13:06.024225 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:13:06.024958 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:13:06.025792 systemd-logind[1183]: Removed session 6. 
Sep 6 00:13:06.179314 kubelet[1922]: E0906 00:13:06.179263 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:06.181352 env[1197]: time="2025-09-06T00:13:06.181297134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rxcq,Uid:9f7f49f2-0bc1-4d5e-987c-73759f27c801,Namespace:kube-system,Attempt:0,}" Sep 6 00:13:06.185308 kubelet[1922]: E0906 00:13:06.185267 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:06.185925 env[1197]: time="2025-09-06T00:13:06.185886961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7vzd9,Uid:000850b5-62a7-4f45-8bc2-a16b98dbf30f,Namespace:kube-system,Attempt:0,}" Sep 6 00:13:06.466398 kubelet[1922]: E0906 00:13:06.466272 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:06.466398 kubelet[1922]: E0906 00:13:06.466370 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:06.541244 kubelet[1922]: I0906 00:13:06.541163 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kwxtr" podStartSLOduration=7.778806665 podStartE2EDuration="19.541141258s" podCreationTimestamp="2025-09-06 00:12:47 +0000 UTC" firstStartedPulling="2025-09-06 00:12:48.63978553 +0000 UTC m=+7.331165421" lastFinishedPulling="2025-09-06 00:13:00.402120113 +0000 UTC m=+19.093500014" observedRunningTime="2025-09-06 00:13:06.541074702 +0000 UTC m=+25.232454623" watchObservedRunningTime="2025-09-06 00:13:06.541141258 +0000 UTC m=+25.232521149" Sep 6 00:13:07.468792 kubelet[1922]: E0906 00:13:07.468750 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:08.469857 kubelet[1922]: E0906 00:13:08.469813 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:09.014520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:13:09.014670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:13:09.015356 systemd-networkd[1016]: cilium_host: Link UP Sep 6 00:13:09.015482 systemd-networkd[1016]: cilium_net: Link UP Sep 6 00:13:09.015614 systemd-networkd[1016]: cilium_net: Gained carrier Sep 6 00:13:09.015739 systemd-networkd[1016]: cilium_host: Gained carrier Sep 6 00:13:09.106538 systemd-networkd[1016]: cilium_vxlan: Link UP Sep 6 00:13:09.106553 systemd-networkd[1016]: cilium_vxlan: Gained carrier Sep 6 00:13:09.192212 systemd-networkd[1016]: cilium_net: Gained IPv6LL Sep 6 00:13:09.280202 systemd-networkd[1016]: cilium_host: Gained IPv6LL Sep 6 00:13:09.373037 kernel: NET: Registered PF_ALG protocol family Sep 6 00:13:10.012045 systemd-networkd[1016]: lxc_health: Link UP Sep 6 00:13:10.029774 systemd-networkd[1016]: lxc_health: Gained carrier Sep 6 00:13:10.030209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:13:10.110697 
systemd-networkd[1016]: lxcb22bb09b6a29: Link UP Sep 6 00:13:10.142107 kernel: eth0: renamed from tmpcc9f4 Sep 6 00:13:10.152107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb22bb09b6a29: link becomes ready Sep 6 00:13:10.152114 systemd-networkd[1016]: lxcb22bb09b6a29: Gained carrier Sep 6 00:13:10.167518 systemd-networkd[1016]: lxc09958dd6405c: Link UP Sep 6 00:13:10.175305 kernel: eth0: renamed from tmp5ad38 Sep 6 00:13:10.188862 systemd-networkd[1016]: lxc09958dd6405c: Gained carrier Sep 6 00:13:10.189074 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc09958dd6405c: link becomes ready Sep 6 00:13:10.968232 systemd-networkd[1016]: cilium_vxlan: Gained IPv6LL Sep 6 00:13:11.024422 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:53808.service. Sep 6 00:13:11.068212 sshd[3135]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:11.069698 sshd[3135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:11.075615 systemd[1]: Started session-7.scope. Sep 6 00:13:11.077082 systemd-logind[1183]: New session 7 of user core. Sep 6 00:13:11.202617 sshd[3135]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:11.204911 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:53808.service: Deactivated successfully. Sep 6 00:13:11.205633 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:13:11.206384 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:13:11.207235 systemd-logind[1183]: Removed session 7. Sep 6 00:13:11.402189 kubelet[1922]: E0906 00:13:11.402167 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:11.478474 kubelet[1922]: E0906 00:13:11.477657 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:11.682209 systemd-networkd[1016]: lxc_health: Gained IPv6LL Sep 6 00:13:11.800173 systemd-networkd[1016]: lxc09958dd6405c: Gained IPv6LL Sep 6 00:13:11.864224 systemd-networkd[1016]: lxcb22bb09b6a29: Gained IPv6LL Sep 6 00:13:13.671773 env[1197]: time="2025-09-06T00:13:13.671660494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:13:13.671773 env[1197]: time="2025-09-06T00:13:13.671724874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:13:13.671773 env[1197]: time="2025-09-06T00:13:13.671736186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:13:13.672440 env[1197]: time="2025-09-06T00:13:13.671915132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ad38cf9e0aae4a2376a03bfeea45d467719f1a1b1189f1f6a500c140f75d4f4 pid=3170 runtime=io.containerd.runc.v2 Sep 6 00:13:13.687132 systemd[1]: Started cri-containerd-5ad38cf9e0aae4a2376a03bfeea45d467719f1a1b1189f1f6a500c140f75d4f4.scope. 
Sep 6 00:13:13.698576 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:13:13.718409 env[1197]: time="2025-09-06T00:13:13.718309150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:13:13.718409 env[1197]: time="2025-09-06T00:13:13.718355287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:13:13.718409 env[1197]: time="2025-09-06T00:13:13.718367840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:13:13.718644 env[1197]: time="2025-09-06T00:13:13.718520757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc9f4b2fbc8b366c9c9a9c8c8d0da9317c45417731436a97f045ac847df65b04 pid=3204 runtime=io.containerd.runc.v2 Sep 6 00:13:13.731139 env[1197]: time="2025-09-06T00:13:13.731087220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7vzd9,Uid:000850b5-62a7-4f45-8bc2-a16b98dbf30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ad38cf9e0aae4a2376a03bfeea45d467719f1a1b1189f1f6a500c140f75d4f4\"" Sep 6 00:13:13.732204 kubelet[1922]: E0906 00:13:13.732096 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:13.733755 systemd[1]: Started cri-containerd-cc9f4b2fbc8b366c9c9a9c8c8d0da9317c45417731436a97f045ac847df65b04.scope. Sep 6 00:13:13.737024 env[1197]: time="2025-09-06T00:13:13.736862167Z" level=info msg="CreateContainer within sandbox \"5ad38cf9e0aae4a2376a03bfeea45d467719f1a1b1189f1f6a500c140f75d4f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:13:13.751370 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:13:13.760399 env[1197]: time="2025-09-06T00:13:13.760356464Z" level=info msg="CreateContainer within sandbox \"5ad38cf9e0aae4a2376a03bfeea45d467719f1a1b1189f1f6a500c140f75d4f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b53d5ab56d837f8825407dc87f05fbf62851729aab0e08b947075655c713d02\"" Sep 6 00:13:13.762722 env[1197]: time="2025-09-06T00:13:13.762698262Z" level=info msg="StartContainer for \"1b53d5ab56d837f8825407dc87f05fbf62851729aab0e08b947075655c713d02\"" Sep 6 00:13:13.778702 systemd[1]: Started cri-containerd-1b53d5ab56d837f8825407dc87f05fbf62851729aab0e08b947075655c713d02.scope. 
Sep 6 00:13:13.779801 env[1197]: time="2025-09-06T00:13:13.779756461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rxcq,Uid:9f7f49f2-0bc1-4d5e-987c-73759f27c801,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc9f4b2fbc8b366c9c9a9c8c8d0da9317c45417731436a97f045ac847df65b04\"" Sep 6 00:13:13.780519 kubelet[1922]: E0906 00:13:13.780486 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:13.788243 env[1197]: time="2025-09-06T00:13:13.788170794Z" level=info msg="CreateContainer within sandbox \"cc9f4b2fbc8b366c9c9a9c8c8d0da9317c45417731436a97f045ac847df65b04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:13:13.809071 env[1197]: time="2025-09-06T00:13:13.808966714Z" level=info msg="StartContainer for \"1b53d5ab56d837f8825407dc87f05fbf62851729aab0e08b947075655c713d02\" returns successfully" Sep 6 00:13:13.819578 env[1197]: time="2025-09-06T00:13:13.819218619Z" level=info msg="CreateContainer within sandbox \"cc9f4b2fbc8b366c9c9a9c8c8d0da9317c45417731436a97f045ac847df65b04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e136d748865e8707e8c3aa7e1522a24f110ddd337019f5f3ba86c384b1b0b5b8\"" Sep 6 00:13:13.820166 env[1197]: time="2025-09-06T00:13:13.820125914Z" level=info msg="StartContainer for \"e136d748865e8707e8c3aa7e1522a24f110ddd337019f5f3ba86c384b1b0b5b8\"" Sep 6 00:13:13.841695 systemd[1]: Started cri-containerd-e136d748865e8707e8c3aa7e1522a24f110ddd337019f5f3ba86c384b1b0b5b8.scope. Sep 6 00:13:13.875831 env[1197]: time="2025-09-06T00:13:13.875764945Z" level=info msg="StartContainer for \"e136d748865e8707e8c3aa7e1522a24f110ddd337019f5f3ba86c384b1b0b5b8\" returns successfully" Sep 6 00:13:14.484122 kubelet[1922]: E0906 00:13:14.484081 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:14.486195 kubelet[1922]: E0906 00:13:14.486151 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:14.678428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367955824.mount: Deactivated successfully. 
Sep 6 00:13:14.904503 kubelet[1922]: I0906 00:13:14.904406 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4rxcq" podStartSLOduration=27.904382773000002 podStartE2EDuration="27.904382773s" podCreationTimestamp="2025-09-06 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:13:14.760543137 +0000 UTC m=+33.451923028" watchObservedRunningTime="2025-09-06 00:13:14.904382773 +0000 UTC m=+33.595762664" Sep 6 00:13:14.918341 kubelet[1922]: I0906 00:13:14.918267 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7vzd9" podStartSLOduration=27.918245017 podStartE2EDuration="27.918245017s" podCreationTimestamp="2025-09-06 00:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:13:14.917633528 +0000 UTC m=+33.609013420" watchObservedRunningTime="2025-09-06 00:13:14.918245017 +0000 UTC m=+33.609624898" Sep 6 00:13:15.488161 kubelet[1922]: E0906 00:13:15.488108 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:15.488349 kubelet[1922]: E0906 00:13:15.488108 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:16.205390 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:53816.service. Sep 6 00:13:16.240117 sshd[3327]: Accepted publickey for core from 10.0.0.1 port 53816 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:16.241370 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:16.244861 systemd-logind[1183]: New session 8 of user core. Sep 6 00:13:16.245627 systemd[1]: Started session-8.scope. Sep 6 00:13:16.356807 sshd[3327]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:16.359735 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:53816.service: Deactivated successfully. Sep 6 00:13:16.360642 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:13:16.361229 systemd-logind[1183]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:13:16.361937 systemd-logind[1183]: Removed session 8. Sep 6 00:13:16.490147 kubelet[1922]: E0906 00:13:16.489974 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:16.490147 kubelet[1922]: E0906 00:13:16.490120 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:21.360933 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:48618.service. Sep 6 00:13:21.395869 sshd[3345]: Accepted publickey for core from 10.0.0.1 port 48618 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:21.397124 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:21.400712 systemd-logind[1183]: New session 9 of user core. Sep 6 00:13:21.401730 systemd[1]: Started session-9.scope. 
Sep 6 00:13:21.520134 sshd[3345]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:21.522263 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:48618.service: Deactivated successfully. Sep 6 00:13:21.522964 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:13:21.523532 systemd-logind[1183]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:13:21.524254 systemd-logind[1183]: Removed session 9. Sep 6 00:13:26.524968 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:48634.service. Sep 6 00:13:26.561163 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:26.562303 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:26.565829 systemd-logind[1183]: New session 10 of user core. Sep 6 00:13:26.566710 systemd[1]: Started session-10.scope. Sep 6 00:13:26.688708 sshd[3360]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:26.691603 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:48634.service: Deactivated successfully. Sep 6 00:13:26.692226 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:13:26.692746 systemd-logind[1183]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:13:26.693987 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:48640.service. Sep 6 00:13:26.694987 systemd-logind[1183]: Removed session 10. Sep 6 00:13:26.728872 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 48640 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:26.730069 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:26.733703 systemd-logind[1183]: New session 11 of user core. Sep 6 00:13:26.734546 systemd[1]: Started session-11.scope. Sep 6 00:13:26.893813 sshd[3374]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:26.898167 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:48646.service. Sep 6 00:13:26.898636 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:48640.service: Deactivated successfully. Sep 6 00:13:26.900371 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:13:26.901203 systemd-logind[1183]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:13:26.902276 systemd-logind[1183]: Removed session 11. Sep 6 00:13:26.934632 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 48646 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:26.935802 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:26.939180 systemd-logind[1183]: New session 12 of user core. Sep 6 00:13:26.939986 systemd[1]: Started session-12.scope. Sep 6 00:13:27.050666 sshd[3384]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:27.053330 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:48646.service: Deactivated successfully. Sep 6 00:13:27.054144 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:13:27.054676 systemd-logind[1183]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:13:27.055420 systemd-logind[1183]: Removed session 12. Sep 6 00:13:32.056457 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:54142.service. 
Sep 6 00:13:32.090971 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 54142 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:32.092627 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:32.097264 systemd-logind[1183]: New session 13 of user core. Sep 6 00:13:32.098372 systemd[1]: Started session-13.scope. Sep 6 00:13:32.206946 sshd[3399]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:32.209652 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:54142.service: Deactivated successfully. Sep 6 00:13:32.210617 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:13:32.211224 systemd-logind[1183]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:13:32.211872 systemd-logind[1183]: Removed session 13. Sep 6 00:13:37.212296 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:54156.service. Sep 6 00:13:37.245198 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 54156 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:37.246393 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:37.249850 systemd-logind[1183]: New session 14 of user core. Sep 6 00:13:37.250930 systemd[1]: Started session-14.scope. Sep 6 00:13:37.357336 sshd[3412]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:37.359351 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:54156.service: Deactivated successfully. Sep 6 00:13:37.360023 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:13:37.360427 systemd-logind[1183]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:13:37.361115 systemd-logind[1183]: Removed session 14. Sep 6 00:13:42.363361 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:37618.service. Sep 6 00:13:42.400398 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 37618 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:42.401805 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:42.406102 systemd-logind[1183]: New session 15 of user core. Sep 6 00:13:42.406948 systemd[1]: Started session-15.scope. Sep 6 00:13:42.524285 sshd[3429]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:42.528079 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:37618.service: Deactivated successfully. Sep 6 00:13:42.528766 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:13:42.529366 systemd-logind[1183]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:13:42.530753 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:37622.service. Sep 6 00:13:42.531701 systemd-logind[1183]: Removed session 15. Sep 6 00:13:42.568282 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 37622 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:42.569517 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:42.573540 systemd-logind[1183]: New session 16 of user core. Sep 6 00:13:42.574422 systemd[1]: Started session-16.scope. Sep 6 00:13:43.957262 sshd[3442]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:43.960007 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:37622.service: Deactivated successfully. Sep 6 00:13:43.960566 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:13:43.961152 systemd-logind[1183]: Session 16 logged out. Waiting for processes to exit. 
Sep 6 00:13:43.962306 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:37632.service. Sep 6 00:13:43.963260 systemd-logind[1183]: Removed session 16. Sep 6 00:13:43.995305 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 37632 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:43.996619 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:43.999706 systemd-logind[1183]: New session 17 of user core. Sep 6 00:13:44.000561 systemd[1]: Started session-17.scope. Sep 6 00:13:46.173090 sshd[3454]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:46.175585 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:37632.service: Deactivated successfully. Sep 6 00:13:46.176183 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:13:46.176919 systemd-logind[1183]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:13:46.177895 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:37646.service. Sep 6 00:13:46.178828 systemd-logind[1183]: Removed session 17. Sep 6 00:13:46.215438 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 37646 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:46.217280 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:46.220728 systemd-logind[1183]: New session 18 of user core. Sep 6 00:13:46.221600 systemd[1]: Started session-18.scope. Sep 6 00:13:46.862093 sshd[3477]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:46.866022 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:37652.service. Sep 6 00:13:46.866682 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:37646.service: Deactivated successfully. Sep 6 00:13:46.867271 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:13:46.868345 systemd-logind[1183]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:13:46.869214 systemd-logind[1183]: Removed session 18. Sep 6 00:13:46.900020 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:46.901184 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:46.904498 systemd-logind[1183]: New session 19 of user core. Sep 6 00:13:46.905302 systemd[1]: Started session-19.scope. Sep 6 00:13:47.007387 sshd[3488]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:47.010117 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:37652.service: Deactivated successfully. Sep 6 00:13:47.010856 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:13:47.011457 systemd-logind[1183]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:13:47.012294 systemd-logind[1183]: Removed session 19. Sep 6 00:13:52.013603 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:38440.service. Sep 6 00:13:52.047569 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 38440 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:52.048878 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:52.052804 systemd-logind[1183]: New session 20 of user core. Sep 6 00:13:52.053854 systemd[1]: Started session-20.scope. Sep 6 00:13:52.167834 sshd[3505]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:52.170759 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:38440.service: Deactivated successfully. Sep 6 00:13:52.171671 systemd[1]: session-20.scope: Deactivated successfully. 
Sep 6 00:13:52.172559 systemd-logind[1183]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:13:52.173412 systemd-logind[1183]: Removed session 20. Sep 6 00:13:54.393669 kubelet[1922]: E0906 00:13:54.393602 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:57.172169 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:38454.service. Sep 6 00:13:57.205940 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 38454 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:13:57.207169 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:57.210909 systemd-logind[1183]: New session 21 of user core. Sep 6 00:13:57.212071 systemd[1]: Started session-21.scope. Sep 6 00:13:57.318894 sshd[3521]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:57.322813 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:38454.service: Deactivated successfully. Sep 6 00:13:57.323820 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:13:57.325015 systemd-logind[1183]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:13:57.326518 systemd-logind[1183]: Removed session 21. Sep 6 00:14:02.323957 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:48730.service. Sep 6 00:14:02.357492 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 48730 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:02.358756 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:02.362035 systemd-logind[1183]: New session 22 of user core. Sep 6 00:14:02.362755 systemd[1]: Started session-22.scope. Sep 6 00:14:02.393058 kubelet[1922]: E0906 00:14:02.393013 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:02.463092 sshd[3535]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:02.465547 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:48730.service: Deactivated successfully. Sep 6 00:14:02.466260 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:14:02.466871 systemd-logind[1183]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:14:02.467560 systemd-logind[1183]: Removed session 22. Sep 6 00:14:07.468176 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:48732.service. Sep 6 00:14:07.501419 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 48732 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:07.502749 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:07.506404 systemd-logind[1183]: New session 23 of user core. Sep 6 00:14:07.507289 systemd[1]: Started session-23.scope. Sep 6 00:14:07.615084 sshd[3549]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:07.617930 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:48732.service: Deactivated successfully. Sep 6 00:14:07.618440 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:14:07.619032 systemd-logind[1183]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:14:07.620033 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:48742.service. Sep 6 00:14:07.621440 systemd-logind[1183]: Removed session 23. 
Sep 6 00:14:07.652816 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:07.654174 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:07.657977 systemd-logind[1183]: New session 24 of user core. Sep 6 00:14:07.659121 systemd[1]: Started session-24.scope. Sep 6 00:14:09.394114 kubelet[1922]: E0906 00:14:09.394067 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:09.872837 systemd[1]: run-containerd-runc-k8s.io-904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277-runc.IccwDl.mount: Deactivated successfully. Sep 6 00:14:09.897549 env[1197]: time="2025-09-06T00:14:09.897465721Z" level=info msg="StopContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" with timeout 30 (s)" Sep 6 00:14:09.898036 env[1197]: time="2025-09-06T00:14:09.897906489Z" level=info msg="Stop container \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" with signal terminated" Sep 6 00:14:09.906112 systemd[1]: cri-containerd-cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f.scope: Deactivated successfully. Sep 6 00:14:09.923949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f-rootfs.mount: Deactivated successfully. Sep 6 00:14:10.178804 env[1197]: time="2025-09-06T00:14:10.178707227Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:14:10.183913 env[1197]: time="2025-09-06T00:14:10.183880824Z" level=info msg="StopContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" with timeout 2 (s)" Sep 6 00:14:10.184103 env[1197]: time="2025-09-06T00:14:10.184083199Z" level=info msg="Stop container \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" with signal terminated" Sep 6 00:14:10.189390 systemd-networkd[1016]: lxc_health: Link DOWN Sep 6 00:14:10.189400 systemd-networkd[1016]: lxc_health: Lost carrier Sep 6 00:14:10.248421 systemd[1]: cri-containerd-904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277.scope: Deactivated successfully. Sep 6 00:14:10.248732 systemd[1]: cri-containerd-904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277.scope: Consumed 6.355s CPU time. Sep 6 00:14:10.261721 env[1197]: time="2025-09-06T00:14:10.261673886Z" level=info msg="shim disconnected" id=cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f Sep 6 00:14:10.261910 env[1197]: time="2025-09-06T00:14:10.261890256Z" level=warning msg="cleaning up after shim disconnected" id=cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f namespace=k8s.io Sep 6 00:14:10.262012 env[1197]: time="2025-09-06T00:14:10.261978223Z" level=info msg="cleaning up dead shim" Sep 6 00:14:10.266234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277-rootfs.mount: Deactivated successfully. 
Sep 6 00:14:10.270500 env[1197]: time="2025-09-06T00:14:10.270446569Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3631 runtime=io.containerd.runc.v2\n" Sep 6 00:14:10.687752 env[1197]: time="2025-09-06T00:14:10.687683987Z" level=info msg="StopContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" returns successfully" Sep 6 00:14:10.688335 env[1197]: time="2025-09-06T00:14:10.688286142Z" level=info msg="shim disconnected" id=904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277 Sep 6 00:14:10.688407 env[1197]: time="2025-09-06T00:14:10.688340775Z" level=warning msg="cleaning up after shim disconnected" id=904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277 namespace=k8s.io Sep 6 00:14:10.688407 env[1197]: time="2025-09-06T00:14:10.688352447Z" level=info msg="cleaning up dead shim" Sep 6 00:14:10.688467 env[1197]: time="2025-09-06T00:14:10.688426207Z" level=info msg="StopPodSandbox for \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\"" Sep 6 00:14:10.688522 env[1197]: time="2025-09-06T00:14:10.688493215Z" level=info msg="Container to stop \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:10.694852 systemd[1]: cri-containerd-e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d.scope: Deactivated successfully. Sep 6 00:14:10.696524 env[1197]: time="2025-09-06T00:14:10.696496207Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n" Sep 6 00:14:10.868806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d-rootfs.mount: Deactivated successfully. Sep 6 00:14:10.868897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d-shm.mount: Deactivated successfully. 
Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.012965195Z" level=info msg="StopContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" returns successfully" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013514047Z" level=info msg="StopPodSandbox for \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\"" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013566667Z" level=info msg="Container to stop \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013580613Z" level=info msg="Container to stop \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013589220Z" level=info msg="Container to stop \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013598317Z" level=info msg="Container to stop \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:11.056250 env[1197]: time="2025-09-06T00:14:11.013608316Z" level=info msg="Container to stop \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:14:11.015546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa-shm.mount: Deactivated successfully. Sep 6 00:14:11.019424 systemd[1]: cri-containerd-e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa.scope: Deactivated successfully. Sep 6 00:14:11.036503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa-rootfs.mount: Deactivated successfully. Sep 6 00:14:11.111128 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:42372.service. Sep 6 00:14:11.266773 sshd[3562]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:11.270418 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:48742.service: Deactivated successfully. Sep 6 00:14:11.271373 systemd-logind[1183]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:14:11.271420 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:14:11.272671 systemd-logind[1183]: Removed session 24. 
Sep 6 00:14:11.293852 env[1197]: time="2025-09-06T00:14:11.293791166Z" level=info msg="shim disconnected" id=e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d Sep 6 00:14:11.293852 env[1197]: time="2025-09-06T00:14:11.293850198Z" level=warning msg="cleaning up after shim disconnected" id=e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d namespace=k8s.io Sep 6 00:14:11.293852 env[1197]: time="2025-09-06T00:14:11.293859676Z" level=info msg="cleaning up dead shim" Sep 6 00:14:11.319972 env[1197]: time="2025-09-06T00:14:11.300733269Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n" Sep 6 00:14:11.319972 env[1197]: time="2025-09-06T00:14:11.319926557Z" level=info msg="TearDown network for sandbox \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\" successfully" Sep 6 00:14:11.319972 env[1197]: time="2025-09-06T00:14:11.319960572Z" level=info msg="StopPodSandbox for \"e897a0a2216d65d804d0468c99a8e42b4ac8f776df1c26d96d761fc4a9fda27d\" returns successfully" Sep 6 00:14:11.320098 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 42372 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:11.320107 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:11.323740 systemd-logind[1183]: New session 25 of user core. Sep 6 00:14:11.325152 systemd[1]: Started session-25.scope. Sep 6 00:14:11.351243 env[1197]: time="2025-09-06T00:14:11.351160541Z" level=info msg="shim disconnected" id=e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa Sep 6 00:14:11.351243 env[1197]: time="2025-09-06T00:14:11.351220154Z" level=warning msg="cleaning up after shim disconnected" id=e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa namespace=k8s.io Sep 6 00:14:11.351243 env[1197]: time="2025-09-06T00:14:11.351231025Z" level=info msg="cleaning up dead shim" Sep 6 00:14:11.358137 env[1197]: time="2025-09-06T00:14:11.358080201Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" Sep 6 00:14:11.358742 env[1197]: time="2025-09-06T00:14:11.358702774Z" level=info msg="TearDown network for sandbox \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" successfully" Sep 6 00:14:11.358810 env[1197]: time="2025-09-06T00:14:11.358741377Z" level=info msg="StopPodSandbox for \"e86ce5ace2acffc27580d43db3abc61e9e7fbc12b21fca26ae32691d29813efa\" returns successfully" Sep 6 00:14:11.452396 kubelet[1922]: E0906 00:14:11.452337 1922 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:14:11.469641 kubelet[1922]: I0906 00:14:11.469597 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-etc-cni-netd\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.469641 kubelet[1922]: I0906 00:14:11.469631 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/616d4cbb-c4e1-4687-b95c-af387fb37bc2-cilium-config-path\") pod \"616d4cbb-c4e1-4687-b95c-af387fb37bc2\" (UID: 
\"616d4cbb-c4e1-4687-b95c-af387fb37bc2\") " Sep 6 00:14:11.469641 kubelet[1922]: I0906 00:14:11.469650 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-bpf-maps\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469693 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44gjv\" (UniqueName: \"kubernetes.io/projected/616d4cbb-c4e1-4687-b95c-af387fb37bc2-kube-api-access-44gjv\") pod \"616d4cbb-c4e1-4687-b95c-af387fb37bc2\" (UID: \"616d4cbb-c4e1-4687-b95c-af387fb37bc2\") " Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469709 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cni-path\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469702 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469722 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-cgroup\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469806 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-lib-modules\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.469901 kubelet[1922]: I0906 00:14:11.469824 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-hostproc\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469848 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/859a35b3-2b01-4a02-9dcb-98985e57e044-clustermesh-secrets\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469861 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-xtables-lock\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469881 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-config-path\") pod 
\"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469896 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj8k8\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-kube-api-access-wj8k8\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469910 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-kernel\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470069 kubelet[1922]: I0906 00:14:11.469924 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-hubble-tls\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470222 kubelet[1922]: I0906 00:14:11.469936 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-run\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470222 kubelet[1922]: I0906 00:14:11.469948 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-net\") pod \"859a35b3-2b01-4a02-9dcb-98985e57e044\" (UID: \"859a35b3-2b01-4a02-9dcb-98985e57e044\") " Sep 6 00:14:11.470222 kubelet[1922]: I0906 00:14:11.469974 1922 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.470222 kubelet[1922]: I0906 00:14:11.470012 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470222 kubelet[1922]: I0906 00:14:11.470029 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470344 kubelet[1922]: I0906 00:14:11.470042 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-hostproc" (OuterVolumeSpecName: "hostproc") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470344 kubelet[1922]: I0906 00:14:11.470100 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470344 kubelet[1922]: I0906 00:14:11.470128 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cni-path" (OuterVolumeSpecName: "cni-path") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470344 kubelet[1922]: I0906 00:14:11.470141 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470842 kubelet[1922]: I0906 00:14:11.470809 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.470924 kubelet[1922]: I0906 00:14:11.470809 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.471051 kubelet[1922]: I0906 00:14:11.471032 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:11.471905 kubelet[1922]: I0906 00:14:11.471853 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/616d4cbb-c4e1-4687-b95c-af387fb37bc2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "616d4cbb-c4e1-4687-b95c-af387fb37bc2" (UID: "616d4cbb-c4e1-4687-b95c-af387fb37bc2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:14:11.473363 kubelet[1922]: I0906 00:14:11.473341 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:14:11.473893 kubelet[1922]: I0906 00:14:11.473859 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:14:11.474422 kubelet[1922]: I0906 00:14:11.474391 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859a35b3-2b01-4a02-9dcb-98985e57e044-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:14:11.474982 systemd[1]: var-lib-kubelet-pods-616d4cbb\x2dc4e1\x2d4687\x2db95c\x2daf387fb37bc2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d44gjv.mount: Deactivated successfully. Sep 6 00:14:11.477790 kubelet[1922]: I0906 00:14:11.477652 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-kube-api-access-wj8k8" (OuterVolumeSpecName: "kube-api-access-wj8k8") pod "859a35b3-2b01-4a02-9dcb-98985e57e044" (UID: "859a35b3-2b01-4a02-9dcb-98985e57e044"). InnerVolumeSpecName "kube-api-access-wj8k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:14:11.475099 systemd[1]: var-lib-kubelet-pods-859a35b3\x2d2b01\x2d4a02\x2d9dcb\x2d98985e57e044-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:14:11.477155 systemd[1]: var-lib-kubelet-pods-859a35b3\x2d2b01\x2d4a02\x2d9dcb\x2d98985e57e044-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwj8k8.mount: Deactivated successfully. Sep 6 00:14:11.477234 systemd[1]: var-lib-kubelet-pods-859a35b3\x2d2b01\x2d4a02\x2d9dcb\x2d98985e57e044-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:14:11.478160 kubelet[1922]: I0906 00:14:11.478128 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/616d4cbb-c4e1-4687-b95c-af387fb37bc2-kube-api-access-44gjv" (OuterVolumeSpecName: "kube-api-access-44gjv") pod "616d4cbb-c4e1-4687-b95c-af387fb37bc2" (UID: "616d4cbb-c4e1-4687-b95c-af387fb37bc2"). InnerVolumeSpecName "kube-api-access-44gjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570231 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570271 1922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj8k8\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-kube-api-access-wj8k8\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570280 1922 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570290 1922 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/859a35b3-2b01-4a02-9dcb-98985e57e044-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570299 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570306 1922 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570314 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/616d4cbb-c4e1-4687-b95c-af387fb37bc2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570382 kubelet[1922]: I0906 00:14:11.570322 1922 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570342 1922 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570349 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570355 1922 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570363 1922 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570370 1922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44gjv\" (UniqueName: \"kubernetes.io/projected/616d4cbb-c4e1-4687-b95c-af387fb37bc2-kube-api-access-44gjv\") on node \"localhost\" 
DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570377 1922 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/859a35b3-2b01-4a02-9dcb-98985e57e044-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.570876 kubelet[1922]: I0906 00:14:11.570384 1922 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859a35b3-2b01-4a02-9dcb-98985e57e044-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:11.693844 kubelet[1922]: I0906 00:14:11.693807 1922 scope.go:117] "RemoveContainer" containerID="cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f" Sep 6 00:14:11.695143 env[1197]: time="2025-09-06T00:14:11.695092350Z" level=info msg="RemoveContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\"" Sep 6 00:14:11.698870 systemd[1]: Removed slice kubepods-besteffort-pod616d4cbb_c4e1_4687_b95c_af387fb37bc2.slice. Sep 6 00:14:11.700374 systemd[1]: Removed slice kubepods-burstable-pod859a35b3_2b01_4a02_9dcb_98985e57e044.slice. Sep 6 00:14:11.700481 systemd[1]: kubepods-burstable-pod859a35b3_2b01_4a02_9dcb_98985e57e044.slice: Consumed 6.467s CPU time. Sep 6 00:14:12.033002 env[1197]: time="2025-09-06T00:14:12.032926176Z" level=info msg="RemoveContainer for \"cf4152c8b99a1f8780a6cbc973b790e9d088ca03096282d0aeca84e79e47825f\" returns successfully" Sep 6 00:14:12.033367 kubelet[1922]: I0906 00:14:12.033341 1922 scope.go:117] "RemoveContainer" containerID="904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277" Sep 6 00:14:12.034872 env[1197]: time="2025-09-06T00:14:12.034494704Z" level=info msg="RemoveContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\"" Sep 6 00:14:12.299925 env[1197]: time="2025-09-06T00:14:12.299774286Z" level=info msg="RemoveContainer for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" returns successfully" Sep 6 00:14:12.300284 kubelet[1922]: I0906 00:14:12.300119 1922 scope.go:117] "RemoveContainer" containerID="a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2" Sep 6 00:14:12.301430 env[1197]: time="2025-09-06T00:14:12.301392098Z" level=info msg="RemoveContainer for \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\"" Sep 6 00:14:12.351501 env[1197]: time="2025-09-06T00:14:12.351416879Z" level=info msg="RemoveContainer for \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\" returns successfully" Sep 6 00:14:12.351842 kubelet[1922]: I0906 00:14:12.351794 1922 scope.go:117] "RemoveContainer" containerID="40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b" Sep 6 00:14:12.353191 env[1197]: time="2025-09-06T00:14:12.353157625Z" level=info msg="RemoveContainer for \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\"" Sep 6 00:14:12.522958 env[1197]: time="2025-09-06T00:14:12.522881304Z" level=info msg="RemoveContainer for \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\" returns successfully" Sep 6 00:14:12.523270 kubelet[1922]: I0906 00:14:12.523221 1922 scope.go:117] "RemoveContainer" containerID="f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735" Sep 6 00:14:12.524594 env[1197]: time="2025-09-06T00:14:12.524546165Z" level=info msg="RemoveContainer for \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\"" Sep 6 00:14:12.712774 env[1197]: time="2025-09-06T00:14:12.712712860Z" level=info 
msg="RemoveContainer for \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\" returns successfully" Sep 6 00:14:12.713058 kubelet[1922]: I0906 00:14:12.713022 1922 scope.go:117] "RemoveContainer" containerID="4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a" Sep 6 00:14:12.714264 env[1197]: time="2025-09-06T00:14:12.714235039Z" level=info msg="RemoveContainer for \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\"" Sep 6 00:14:12.719949 env[1197]: time="2025-09-06T00:14:12.719888452Z" level=info msg="RemoveContainer for \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\" returns successfully" Sep 6 00:14:12.720339 kubelet[1922]: I0906 00:14:12.720302 1922 scope.go:117] "RemoveContainer" containerID="904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277" Sep 6 00:14:12.720735 env[1197]: time="2025-09-06T00:14:12.720635590Z" level=error msg="ContainerStatus for \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\": not found" Sep 6 00:14:12.721565 kubelet[1922]: E0906 00:14:12.720879 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\": not found" containerID="904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277" Sep 6 00:14:12.722571 kubelet[1922]: I0906 00:14:12.720916 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277"} err="failed to get container status \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\": rpc error: code = NotFound desc = an error occurred when try to find container \"904180020d3a3f388155b90a55e265b9f896979ea5e6019688959f7ca8c99277\": not found" Sep 6 00:14:12.722571 kubelet[1922]: I0906 00:14:12.722402 1922 scope.go:117] "RemoveContainer" containerID="a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2" Sep 6 00:14:12.722744 env[1197]: time="2025-09-06T00:14:12.722678790Z" level=error msg="ContainerStatus for \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\": not found" Sep 6 00:14:12.724950 kubelet[1922]: E0906 00:14:12.724900 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\": not found" containerID="a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2" Sep 6 00:14:12.724950 kubelet[1922]: I0906 00:14:12.724953 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2"} err="failed to get container status \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a624baae144064845e1abef61f1d164d603bd6616c08cde6a7d5dff42717e3e2\": not found" Sep 6 00:14:12.724950 kubelet[1922]: I0906 00:14:12.725010 1922 scope.go:117] "RemoveContainer" 
containerID="40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b" Sep 6 00:14:12.725731 env[1197]: time="2025-09-06T00:14:12.725604293Z" level=error msg="ContainerStatus for \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\": not found" Sep 6 00:14:12.725943 kubelet[1922]: E0906 00:14:12.725826 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\": not found" containerID="40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b" Sep 6 00:14:12.725943 kubelet[1922]: I0906 00:14:12.725859 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b"} err="failed to get container status \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\": rpc error: code = NotFound desc = an error occurred when try to find container \"40c27cbf8906c7eeef8ae6e8c47365e8b920b03f7fc1ddb6db1a1d4adf11422b\": not found" Sep 6 00:14:12.725943 kubelet[1922]: I0906 00:14:12.725874 1922 scope.go:117] "RemoveContainer" containerID="f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735" Sep 6 00:14:12.726122 env[1197]: time="2025-09-06T00:14:12.726022477Z" level=error msg="ContainerStatus for \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\": not found" Sep 6 00:14:12.726157 kubelet[1922]: E0906 00:14:12.726126 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\": not found" containerID="f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735" Sep 6 00:14:12.726157 kubelet[1922]: I0906 00:14:12.726143 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735"} err="failed to get container status \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\": rpc error: code = NotFound desc = an error occurred when try to find container \"f385a418dce220b694e4db39a86e9db7e92502e6388f447ce12a64a0a9e8c735\": not found" Sep 6 00:14:12.726249 kubelet[1922]: I0906 00:14:12.726173 1922 scope.go:117] "RemoveContainer" containerID="4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a" Sep 6 00:14:12.726423 env[1197]: time="2025-09-06T00:14:12.726365539Z" level=error msg="ContainerStatus for \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\": not found" Sep 6 00:14:12.726550 kubelet[1922]: E0906 00:14:12.726524 1922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\": not found" 
containerID="4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a" Sep 6 00:14:12.726591 kubelet[1922]: I0906 00:14:12.726563 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a"} err="failed to get container status \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c26580b240d7086ec94065285f0cb33dbcbd51ca88a249bc21aeb1ded8ecf5a\": not found" Sep 6 00:14:12.768987 sshd[3695]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:12.772625 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:42372.service: Deactivated successfully. Sep 6 00:14:12.773429 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:14:12.774579 systemd-logind[1183]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:14:12.775929 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:42374.service. Sep 6 00:14:12.781348 systemd-logind[1183]: Removed session 25. Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795757 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="mount-bpf-fs" Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795784 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="616d4cbb-c4e1-4687-b95c-af387fb37bc2" containerName="cilium-operator" Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795790 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="cilium-agent" Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795796 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="mount-cgroup" Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795801 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="apply-sysctl-overwrites" Sep 6 00:14:12.795800 kubelet[1922]: E0906 00:14:12.795807 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="clean-cilium-state" Sep 6 00:14:12.796224 kubelet[1922]: I0906 00:14:12.795836 1922 memory_manager.go:354] "RemoveStaleState removing state" podUID="616d4cbb-c4e1-4687-b95c-af387fb37bc2" containerName="cilium-operator" Sep 6 00:14:12.796224 kubelet[1922]: I0906 00:14:12.795841 1922 memory_manager.go:354] "RemoveStaleState removing state" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" containerName="cilium-agent" Sep 6 00:14:12.809372 systemd[1]: Created slice kubepods-burstable-pod7a5f1fbd_2bcc_4e3e_b613_e982d44aa95e.slice. Sep 6 00:14:12.819536 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 42374 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:12.820102 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:12.832152 systemd[1]: Started session-26.scope. Sep 6 00:14:12.832788 systemd-logind[1183]: New session 26 of user core. Sep 6 00:14:12.957845 sshd[3737]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:12.962173 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:42384.service. Sep 6 00:14:12.962876 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:42374.service: Deactivated successfully. 
Sep 6 00:14:12.963614 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 00:14:12.964606 systemd-logind[1183]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:14:12.965943 systemd-logind[1183]: Removed session 26. Sep 6 00:14:12.975691 kubelet[1922]: E0906 00:14:12.975616 1922 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z5plp lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-mc9f4" podUID="7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" Sep 6 00:14:12.980411 kubelet[1922]: I0906 00:14:12.980350 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5plp\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-kube-api-access-z5plp\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980411 kubelet[1922]: I0906 00:14:12.980396 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-bpf-maps\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980411 kubelet[1922]: I0906 00:14:12.980418 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-cgroup\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980435 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hubble-tls\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980451 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-run\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980467 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-kernel\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980478 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hostproc\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980490 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-etc-cni-netd\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980706 kubelet[1922]: I0906 00:14:12.980535 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-ipsec-secrets\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980560 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-lib-modules\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980588 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-clustermesh-secrets\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980617 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-config-path\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980636 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cni-path\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980653 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-xtables-lock\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.980901 kubelet[1922]: I0906 00:14:12.980669 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-net\") pod \"cilium-mc9f4\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " pod="kube-system/cilium-mc9f4" Sep 6 00:14:12.999740 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 42384 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:14:13.001070 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:13.004470 systemd-logind[1183]: New session 27 of user core. Sep 6 00:14:13.005459 systemd[1]: Started session-27.scope. 
Sep 6 00:14:13.394880 kubelet[1922]: I0906 00:14:13.394760 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="616d4cbb-c4e1-4687-b95c-af387fb37bc2" path="/var/lib/kubelet/pods/616d4cbb-c4e1-4687-b95c-af387fb37bc2/volumes" Sep 6 00:14:13.395197 kubelet[1922]: I0906 00:14:13.395177 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859a35b3-2b01-4a02-9dcb-98985e57e044" path="/var/lib/kubelet/pods/859a35b3-2b01-4a02-9dcb-98985e57e044/volumes" Sep 6 00:14:13.744424 kubelet[1922]: I0906 00:14:13.744366 1922 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:14:13Z","lastTransitionTime":"2025-09-06T00:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:14:13.785468 kubelet[1922]: I0906 00:14:13.785400 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-config-path\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785493 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-clustermesh-secrets\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785519 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-xtables-lock\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785534 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-net\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785550 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hubble-tls\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785567 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-lib-modules\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785693 kubelet[1922]: I0906 00:14:13.785590 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-run\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785603 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-etc-cni-netd\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785618 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-ipsec-secrets\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785615 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785631 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-bpf-maps\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785649 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.785843 kubelet[1922]: I0906 00:14:13.785689 1922 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.785981 kubelet[1922]: I0906 00:14:13.785705 1922 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.786155 kubelet[1922]: I0906 00:14:13.786113 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.786265 kubelet[1922]: I0906 00:14:13.786233 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.786314 kubelet[1922]: I0906 00:14:13.786272 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.786383 kubelet[1922]: I0906 00:14:13.786363 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.787106 kubelet[1922]: I0906 00:14:13.787074 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:14:13.788580 kubelet[1922]: I0906 00:14:13.788540 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:14:13.788635 kubelet[1922]: I0906 00:14:13.788600 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:14:13.789676 kubelet[1922]: I0906 00:14:13.789636 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:14:13.790693 systemd[1]: var-lib-kubelet-pods-7a5f1fbd\x2d2bcc\x2d4e3e\x2db613\x2de982d44aa95e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:14:13.790798 systemd[1]: var-lib-kubelet-pods-7a5f1fbd\x2d2bcc\x2d4e3e\x2db613\x2de982d44aa95e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:14:13.790869 systemd[1]: var-lib-kubelet-pods-7a5f1fbd\x2d2bcc\x2d4e3e\x2db613\x2de982d44aa95e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:14:13.886460 kubelet[1922]: I0906 00:14:13.886396 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-cgroup\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.886460 kubelet[1922]: I0906 00:14:13.886459 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5plp\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-kube-api-access-z5plp\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886477 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cni-path\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886502 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-kernel\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886520 1922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hostproc\") pod \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\" (UID: \"7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e\") " Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886558 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886570 1922 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886581 1922 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886704 kubelet[1922]: I0906 00:14:13.886594 1922 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886605 1922 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886616 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886626 1922 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886636 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886560 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.886858 kubelet[1922]: I0906 00:14:13.886646 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.887009 kubelet[1922]: I0906 00:14:13.886604 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.887009 kubelet[1922]: I0906 00:14:13.886626 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:14:13.889556 kubelet[1922]: I0906 00:14:13.889518 1922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-kube-api-access-z5plp" (OuterVolumeSpecName: "kube-api-access-z5plp") pod "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" (UID: "7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e"). InnerVolumeSpecName "kube-api-access-z5plp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:14:13.986833 kubelet[1922]: I0906 00:14:13.986766 1922 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.986833 kubelet[1922]: I0906 00:14:13.986816 1922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5plp\" (UniqueName: \"kubernetes.io/projected/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-kube-api-access-z5plp\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.986833 kubelet[1922]: I0906 00:14:13.986839 1922 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.986833 kubelet[1922]: I0906 00:14:13.986852 1922 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:13.987163 kubelet[1922]: I0906 00:14:13.986863 1922 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:14:14.085888 systemd[1]: var-lib-kubelet-pods-7a5f1fbd\x2d2bcc\x2d4e3e\x2db613\x2de982d44aa95e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5plp.mount: Deactivated successfully. Sep 6 00:14:14.709708 systemd[1]: Removed slice kubepods-burstable-pod7a5f1fbd_2bcc_4e3e_b613_e982d44aa95e.slice. Sep 6 00:14:14.751207 systemd[1]: Created slice kubepods-burstable-pod9703144d_bc9a_48d8_804f_baa055e4308c.slice. 
Sep 6 00:14:14.892708 kubelet[1922]: I0906 00:14:14.892623 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tmw\" (UniqueName: \"kubernetes.io/projected/9703144d-bc9a-48d8-804f-baa055e4308c-kube-api-access-84tmw\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.892708 kubelet[1922]: I0906 00:14:14.892694 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9703144d-bc9a-48d8-804f-baa055e4308c-cilium-ipsec-secrets\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.892708 kubelet[1922]: I0906 00:14:14.892719 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-host-proc-sys-net\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892743 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-host-proc-sys-kernel\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892767 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-cilium-run\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892786 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9703144d-bc9a-48d8-804f-baa055e4308c-cilium-config-path\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892803 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9703144d-bc9a-48d8-804f-baa055e4308c-hubble-tls\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892819 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-hostproc\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893245 kubelet[1922]: I0906 00:14:14.892837 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9703144d-bc9a-48d8-804f-baa055e4308c-clustermesh-secrets\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.892853 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-bpf-maps\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.892872 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-cilium-cgroup\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.892893 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-cni-path\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.892942 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-lib-modules\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.892986 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-etc-cni-netd\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:14.893392 kubelet[1922]: I0906 00:14:14.893029 1922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9703144d-bc9a-48d8-804f-baa055e4308c-xtables-lock\") pod \"cilium-trzlv\" (UID: \"9703144d-bc9a-48d8-804f-baa055e4308c\") " pod="kube-system/cilium-trzlv"
Sep 6 00:14:15.055707 kubelet[1922]: E0906 00:14:15.055543 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:15.056570 env[1197]: time="2025-09-06T00:14:15.056531443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trzlv,Uid:9703144d-bc9a-48d8-804f-baa055e4308c,Namespace:kube-system,Attempt:0,}"
Sep 6 00:14:15.084987 env[1197]: time="2025-09-06T00:14:15.084900391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:14:15.084987 env[1197]: time="2025-09-06T00:14:15.084946438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:14:15.084987 env[1197]: time="2025-09-06T00:14:15.084959843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:14:15.085257 env[1197]: time="2025-09-06T00:14:15.085149383Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca pid=3779 runtime=io.containerd.runc.v2
Sep 6 00:14:15.098767 systemd[1]: run-containerd-runc-k8s.io-9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca-runc.YhvWdD.mount: Deactivated successfully.
Sep 6 00:14:15.101507 systemd[1]: Started cri-containerd-9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca.scope.
Sep 6 00:14:15.129891 env[1197]: time="2025-09-06T00:14:15.129831257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trzlv,Uid:9703144d-bc9a-48d8-804f-baa055e4308c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\""
Sep 6 00:14:15.130724 kubelet[1922]: E0906 00:14:15.130695 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:15.132897 env[1197]: time="2025-09-06T00:14:15.132865724Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:14:15.330320 env[1197]: time="2025-09-06T00:14:15.330158700Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f\""
Sep 6 00:14:15.330871 env[1197]: time="2025-09-06T00:14:15.330833150Z" level=info msg="StartContainer for \"68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f\""
Sep 6 00:14:15.343926 systemd[1]: Started cri-containerd-68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f.scope.
Sep 6 00:14:15.393402 kubelet[1922]: E0906 00:14:15.393369 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:15.433942 systemd[1]: cri-containerd-68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f.scope: Deactivated successfully.
Sep 6 00:14:15.443038 env[1197]: time="2025-09-06T00:14:15.442984777Z" level=info msg="StartContainer for \"68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f\" returns successfully"
Sep 6 00:14:15.443533 kubelet[1922]: I0906 00:14:15.443491 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e" path="/var/lib/kubelet/pods/7a5f1fbd-2bcc-4e3e-b613-e982d44aa95e/volumes"
Sep 6 00:14:15.540974 env[1197]: time="2025-09-06T00:14:15.540890489Z" level=info msg="shim disconnected" id=68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f
Sep 6 00:14:15.540974 env[1197]: time="2025-09-06T00:14:15.540975621Z" level=warning msg="cleaning up after shim disconnected" id=68978d82db91a3d1a9f3b0a356b6fac8fa695c33290cdfe0ae28fe5c7193555f namespace=k8s.io
Sep 6 00:14:15.541368 env[1197]: time="2025-09-06T00:14:15.541016379Z" level=info msg="cleaning up dead shim"
Sep 6 00:14:15.548430 env[1197]: time="2025-09-06T00:14:15.548375289Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3865 runtime=io.containerd.runc.v2\n"
Sep 6 00:14:15.709066 kubelet[1922]: E0906 00:14:15.707524 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:15.709678 env[1197]: time="2025-09-06T00:14:15.709345098Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:14:15.727691 env[1197]: time="2025-09-06T00:14:15.727620818Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192\""
Sep 6 00:14:15.728328 env[1197]: time="2025-09-06T00:14:15.728278064Z" level=info msg="StartContainer for \"7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192\""
Sep 6 00:14:15.743632 systemd[1]: Started cri-containerd-7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192.scope.
Sep 6 00:14:15.767297 env[1197]: time="2025-09-06T00:14:15.767230740Z" level=info msg="StartContainer for \"7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192\" returns successfully"
Sep 6 00:14:15.772131 systemd[1]: cri-containerd-7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192.scope: Deactivated successfully.
Sep 6 00:14:15.795116 env[1197]: time="2025-09-06T00:14:15.795049744Z" level=info msg="shim disconnected" id=7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192
Sep 6 00:14:15.795116 env[1197]: time="2025-09-06T00:14:15.795111822Z" level=warning msg="cleaning up after shim disconnected" id=7619e888c8e7408c59ceb883392e6376d1970a7a9cc3593473b3dddbb53a5192 namespace=k8s.io
Sep 6 00:14:15.795116 env[1197]: time="2025-09-06T00:14:15.795123725Z" level=info msg="cleaning up dead shim"
Sep 6 00:14:15.802669 env[1197]: time="2025-09-06T00:14:15.802607702Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n"
Sep 6 00:14:16.088111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296988365.mount: Deactivated successfully.
Sep 6 00:14:16.453440 kubelet[1922]: E0906 00:14:16.453393 1922 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:14:16.711111 kubelet[1922]: E0906 00:14:16.710956 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:16.712864 env[1197]: time="2025-09-06T00:14:16.712815640Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:14:16.859376 env[1197]: time="2025-09-06T00:14:16.859282159Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34\""
Sep 6 00:14:16.860154 env[1197]: time="2025-09-06T00:14:16.860087497Z" level=info msg="StartContainer for \"08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34\""
Sep 6 00:14:16.881344 systemd[1]: Started cri-containerd-08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34.scope.
Sep 6 00:14:16.910150 env[1197]: time="2025-09-06T00:14:16.910076977Z" level=info msg="StartContainer for \"08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34\" returns successfully"
Sep 6 00:14:16.921035 systemd[1]: cri-containerd-08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34.scope: Deactivated successfully.
Sep 6 00:14:17.088265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34-rootfs.mount: Deactivated successfully.
Sep 6 00:14:17.208758 env[1197]: time="2025-09-06T00:14:17.208695368Z" level=info msg="shim disconnected" id=08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34
Sep 6 00:14:17.208758 env[1197]: time="2025-09-06T00:14:17.208755161Z" level=warning msg="cleaning up after shim disconnected" id=08435325643848edd3bf3dcad2d1a2aa3081f5fb51bb3e00c9cba74dddbd3a34 namespace=k8s.io
Sep 6 00:14:17.208758 env[1197]: time="2025-09-06T00:14:17.208763848Z" level=info msg="cleaning up dead shim"
Sep 6 00:14:17.215135 env[1197]: time="2025-09-06T00:14:17.215097237Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3981 runtime=io.containerd.runc.v2\n"
Sep 6 00:14:17.717499 kubelet[1922]: E0906 00:14:17.714801 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:17.719761 env[1197]: time="2025-09-06T00:14:17.719680252Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:14:17.755881 env[1197]: time="2025-09-06T00:14:17.755794571Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca\""
Sep 6 00:14:17.756681 env[1197]: time="2025-09-06T00:14:17.756601421Z" level=info msg="StartContainer for \"337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca\""
Sep 6 00:14:17.777699 systemd[1]: Started cri-containerd-337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca.scope.
Sep 6 00:14:17.805116 systemd[1]: cri-containerd-337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca.scope: Deactivated successfully.
Sep 6 00:14:17.805889 env[1197]: time="2025-09-06T00:14:17.805841930Z" level=info msg="StartContainer for \"337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca\" returns successfully"
Sep 6 00:14:17.830206 env[1197]: time="2025-09-06T00:14:17.830139843Z" level=info msg="shim disconnected" id=337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca
Sep 6 00:14:17.830206 env[1197]: time="2025-09-06T00:14:17.830195018Z" level=warning msg="cleaning up after shim disconnected" id=337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca namespace=k8s.io
Sep 6 00:14:17.830206 env[1197]: time="2025-09-06T00:14:17.830210757Z" level=info msg="cleaning up dead shim"
Sep 6 00:14:17.836670 env[1197]: time="2025-09-06T00:14:17.836610222Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4035 runtime=io.containerd.runc.v2\n"
Sep 6 00:14:18.088562 systemd[1]: run-containerd-runc-k8s.io-337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca-runc.gn28GM.mount: Deactivated successfully.
Sep 6 00:14:18.088678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-337ab00335a36b900b8cf9b7b20e37521cbd21832ca8fb9de4da242a9861feca-rootfs.mount: Deactivated successfully.
Sep 6 00:14:18.719261 kubelet[1922]: E0906 00:14:18.719232 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:18.720828 env[1197]: time="2025-09-06T00:14:18.720725972Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:14:19.179950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358908856.mount: Deactivated successfully.
Sep 6 00:14:19.303133 env[1197]: time="2025-09-06T00:14:19.303035530Z" level=info msg="CreateContainer within sandbox \"9cb12cf55e41bc8fb24dabce6f382558034b1b94c4ad4b6fa11ef8f1508035ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a609c90194b1a4f40ca4c3a2ee9e6e14192b5a9db140c49e8a390d06dd843ef\""
Sep 6 00:14:19.303812 env[1197]: time="2025-09-06T00:14:19.303719006Z" level=info msg="StartContainer for \"2a609c90194b1a4f40ca4c3a2ee9e6e14192b5a9db140c49e8a390d06dd843ef\""
Sep 6 00:14:19.325127 systemd[1]: Started cri-containerd-2a609c90194b1a4f40ca4c3a2ee9e6e14192b5a9db140c49e8a390d06dd843ef.scope.
Sep 6 00:14:19.354734 env[1197]: time="2025-09-06T00:14:19.354662150Z" level=info msg="StartContainer for \"2a609c90194b1a4f40ca4c3a2ee9e6e14192b5a9db140c49e8a390d06dd843ef\" returns successfully"
Sep 6 00:14:19.676038 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:14:19.723904 kubelet[1922]: E0906 00:14:19.723853 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:19.745208 kubelet[1922]: I0906 00:14:19.745092 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-trzlv" podStartSLOduration=5.745075386 podStartE2EDuration="5.745075386s" podCreationTimestamp="2025-09-06 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:14:19.744806466 +0000 UTC m=+98.436186378" watchObservedRunningTime="2025-09-06 00:14:19.745075386 +0000 UTC m=+98.436455277"
Sep 6 00:14:20.178016 systemd[1]: run-containerd-runc-k8s.io-2a609c90194b1a4f40ca4c3a2ee9e6e14192b5a9db140c49e8a390d06dd843ef-runc.sn5ufR.mount: Deactivated successfully.
Sep 6 00:14:21.056235 kubelet[1922]: E0906 00:14:21.056199 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:22.393753 kubelet[1922]: E0906 00:14:22.393717 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:22.434087 systemd-networkd[1016]: lxc_health: Link UP
Sep 6 00:14:22.451656 systemd-networkd[1016]: lxc_health: Gained carrier
Sep 6 00:14:22.452021 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:14:23.057299 kubelet[1922]: E0906 00:14:23.057244 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:23.734770 kubelet[1922]: E0906 00:14:23.734735 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:24.322264 systemd-networkd[1016]: lxc_health: Gained IPv6LL
Sep 6 00:14:24.737783 kubelet[1922]: E0906 00:14:24.737750 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:14:28.043135 sshd[3749]: pam_unix(sshd:session): session closed for user core
Sep 6 00:14:28.045418 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:42384.service: Deactivated successfully.
Sep 6 00:14:28.046117 systemd[1]: session-27.scope: Deactivated successfully.
Sep 6 00:14:28.046574 systemd-logind[1183]: Session 27 logged out. Waiting for processes to exit.
Sep 6 00:14:28.047177 systemd-logind[1183]: Removed session 27.