Nov 1 00:41:26.131604 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 00:41:26.131649 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:26.131661 kernel: BIOS-provided physical RAM map: Nov 1 00:41:26.131668 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:41:26.131676 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:41:26.131683 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:41:26.131692 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 1 00:41:26.131700 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 1 00:41:26.131711 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 00:41:26.131718 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 1 00:41:26.131726 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:41:26.131734 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:41:26.131742 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 1 00:41:26.131749 kernel: NX (Execute Disable) protection: active Nov 1 00:41:26.131762 kernel: SMBIOS 2.8 present. Nov 1 00:41:26.131770 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 1 00:41:26.131778 kernel: Hypervisor detected: KVM Nov 1 00:41:26.131786 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:41:26.131799 kernel: kvm-clock: cpu 0, msr 9b1a0001, primary cpu clock Nov 1 00:41:26.131807 kernel: kvm-clock: using sched offset of 4683355735 cycles Nov 1 00:41:26.131816 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:41:26.131825 kernel: tsc: Detected 2794.748 MHz processor Nov 1 00:41:26.131834 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:41:26.131845 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:41:26.131854 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 1 00:41:26.131863 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:41:26.131871 kernel: Using GB pages for direct mapping Nov 1 00:41:26.131880 kernel: ACPI: Early table checksum verification disabled Nov 1 00:41:26.131888 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 1 00:41:26.131897 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131906 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131915 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131926 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 1 00:41:26.131934 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131943 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131951 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Nov 1 00:41:26.131970 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:41:26.131979 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 1 00:41:26.131988 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 1 00:41:26.131997 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 1 00:41:26.132011 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 1 00:41:26.132020 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 1 00:41:26.132029 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 1 00:41:26.132039 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 1 00:41:26.132048 kernel: No NUMA configuration found Nov 1 00:41:26.132057 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 1 00:41:26.132068 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 1 00:41:26.132077 kernel: Zone ranges: Nov 1 00:41:26.132086 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:41:26.132096 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 1 00:41:26.132105 kernel: Normal empty Nov 1 00:41:26.132114 kernel: Movable zone start for each node Nov 1 00:41:26.132123 kernel: Early memory node ranges Nov 1 00:41:26.132132 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:41:26.132141 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 1 00:41:26.132150 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 1 00:41:26.132166 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:41:26.132175 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:41:26.132186 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 1 00:41:26.132196 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:41:26.132205 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:41:26.132213 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:41:26.132223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:41:26.132232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:41:26.132241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:41:26.132256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:41:26.132265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:41:26.132274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:41:26.132284 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:41:26.132293 kernel: TSC deadline timer available Nov 1 00:41:26.132302 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 1 00:41:26.132311 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:41:26.132320 kernel: kvm-guest: setup PV sched yield Nov 1 00:41:26.132329 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:41:26.132341 kernel: Booting paravirtualized kernel on KVM Nov 1 00:41:26.132387 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:41:26.132397 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Nov 1 00:41:26.132406 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Nov 1 00:41:26.132415 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 
alloc=1*2097152 Nov 1 00:41:26.132424 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 1 00:41:26.132433 kernel: kvm-guest: setup async PF for cpu 0 Nov 1 00:41:26.132442 kernel: kvm-guest: stealtime: cpu 0, msr 9cc1c0c0 Nov 1 00:41:26.132451 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:41:26.132463 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:41:26.132472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Nov 1 00:41:26.132481 kernel: Policy zone: DMA32 Nov 1 00:41:26.132492 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:26.132502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:41:26.132511 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:41:26.132520 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:41:26.132529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:41:26.132542 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 134796K reserved, 0K cma-reserved) Nov 1 00:41:26.132551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:41:26.132560 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 00:41:26.132569 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 00:41:26.132578 kernel: rcu: Hierarchical RCU implementation. Nov 1 00:41:26.132588 kernel: rcu: RCU event tracing is enabled. Nov 1 00:41:26.132598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:41:26.132607 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:41:26.132616 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:41:26.132628 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:41:26.132637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:41:26.132647 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 1 00:41:26.132656 kernel: random: crng init done Nov 1 00:41:26.132665 kernel: Console: colour VGA+ 80x25 Nov 1 00:41:26.132674 kernel: printk: console [ttyS0] enabled Nov 1 00:41:26.132682 kernel: ACPI: Core revision 20210730 Nov 1 00:41:26.132691 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:41:26.132701 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:41:26.132712 kernel: x2apic enabled Nov 1 00:41:26.132721 kernel: Switched APIC routing to physical x2apic. Nov 1 00:41:26.132735 kernel: kvm-guest: setup PV IPIs Nov 1 00:41:26.132744 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:41:26.132753 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:41:26.132765 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 1 00:41:26.132775 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:41:26.132784 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:41:26.132794 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:41:26.132810 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:41:26.132820 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:41:26.132830 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:41:26.132841 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 1 00:41:26.132850 kernel: active return thunk: retbleed_return_thunk Nov 1 00:41:26.132860 kernel: RETBleed: Mitigation: untrained return thunk Nov 1 00:41:26.132870 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:41:26.132880 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 00:41:26.132890 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:41:26.132901 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:41:26.132911 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:41:26.132921 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:41:26.132930 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:41:26.132940 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:41:26.132950 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:41:26.132969 kernel: LSM: Security Framework initializing Nov 1 00:41:26.132980 kernel: SELinux: Initializing. Nov 1 00:41:26.132990 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:41:26.133000 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:41:26.133010 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 1 00:41:26.133019 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:41:26.133029 kernel: ... version: 0 Nov 1 00:41:26.133039 kernel: ... bit width: 48 Nov 1 00:41:26.133048 kernel: ... generic registers: 6 Nov 1 00:41:26.133058 kernel: ... value mask: 0000ffffffffffff Nov 1 00:41:26.133070 kernel: ... max period: 00007fffffffffff Nov 1 00:41:26.133079 kernel: ... fixed-purpose events: 0 Nov 1 00:41:26.133089 kernel: ... event mask: 000000000000003f Nov 1 00:41:26.133099 kernel: signal: max sigframe size: 1776 Nov 1 00:41:26.133108 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:41:26.133118 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:41:26.133128 kernel: x86: Booting SMP configuration: Nov 1 00:41:26.133137 kernel: .... 
node #0, CPUs: #1 Nov 1 00:41:26.133147 kernel: kvm-clock: cpu 1, msr 9b1a0041, secondary cpu clock Nov 1 00:41:26.133158 kernel: kvm-guest: setup async PF for cpu 1 Nov 1 00:41:26.133168 kernel: kvm-guest: stealtime: cpu 1, msr 9cc9c0c0 Nov 1 00:41:26.133177 kernel: #2 Nov 1 00:41:26.133187 kernel: kvm-clock: cpu 2, msr 9b1a0081, secondary cpu clock Nov 1 00:41:26.133197 kernel: kvm-guest: setup async PF for cpu 2 Nov 1 00:41:26.133206 kernel: kvm-guest: stealtime: cpu 2, msr 9cd1c0c0 Nov 1 00:41:26.133219 kernel: #3 Nov 1 00:41:26.133229 kernel: kvm-clock: cpu 3, msr 9b1a00c1, secondary cpu clock Nov 1 00:41:26.133239 kernel: kvm-guest: setup async PF for cpu 3 Nov 1 00:41:26.133248 kernel: kvm-guest: stealtime: cpu 3, msr 9cd9c0c0 Nov 1 00:41:26.133259 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:41:26.133269 kernel: smpboot: Max logical packages: 1 Nov 1 00:41:26.133278 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 1 00:41:26.133288 kernel: devtmpfs: initialized Nov 1 00:41:26.133298 kernel: x86/mm: Memory block size: 128MB Nov 1 00:41:26.133308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:41:26.133318 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:41:26.133327 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:41:26.133337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:41:26.133378 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:41:26.133387 kernel: audit: type=2000 audit(1761957684.946:1): state=initialized audit_enabled=0 res=1 Nov 1 00:41:26.133397 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:41:26.133407 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:41:26.133416 kernel: cpuidle: using governor menu Nov 1 00:41:26.133426 kernel: ACPI: bus type PCI registered Nov 1 00:41:26.133436 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:41:26.133446 kernel: dca service started, version 1.12.1 Nov 1 00:41:26.133456 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:41:26.133467 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Nov 1 00:41:26.133477 kernel: PCI: Using configuration type 1 for base access Nov 1 00:41:26.133487 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 00:41:26.133497 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:41:26.133507 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:41:26.133517 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:41:26.133526 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:41:26.133536 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:41:26.133546 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:41:26.133557 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:41:26.133567 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:41:26.133577 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:41:26.133587 kernel: ACPI: Interpreter enabled Nov 1 00:41:26.133596 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:41:26.133606 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:41:26.133616 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:41:26.133625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:41:26.133635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:41:26.133835 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:41:26.133933 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:41:26.134041 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:41:26.134055 kernel: PCI host bridge to bus 0000:00 Nov 1 00:41:26.134152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:41:26.134237 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:41:26.134327 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:41:26.134430 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 1 00:41:26.134513 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:41:26.134594 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 1 00:41:26.134674 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:41:26.134779 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:41:26.134888 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:41:26.134998 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 1 00:41:26.135092 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 1 00:41:26.135183 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 1 00:41:26.135275 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:41:26.135398 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:41:26.135493 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 1 00:41:26.135590 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 1 00:41:26.135685 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 1 00:41:26.135787 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:41:26.135880 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 00:41:26.135980 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 1 00:41:26.136079 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 1 00:41:26.136179 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:41:26.136276 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc0e0-0xc0ff] Nov 1 00:41:26.136407 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 1 00:41:26.136513 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 1 00:41:26.136614 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 1 00:41:26.136731 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:41:26.136833 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:41:26.136975 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:41:26.137081 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 1 00:41:26.137170 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 1 00:41:26.137293 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:41:26.137421 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 00:41:26.137436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:41:26.137446 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:41:26.137456 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:41:26.137466 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:41:26.137480 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:41:26.137489 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:41:26.137499 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:41:26.137508 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:41:26.137518 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:41:26.137528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:41:26.137537 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:41:26.137547 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:41:26.137556 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:41:26.137569 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:41:26.137578 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:41:26.137587 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:41:26.137597 kernel: iommu: Default domain type: Translated Nov 1 00:41:26.137606 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:41:26.137721 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:41:26.137834 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:41:26.137946 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:41:26.137975 kernel: vgaarb: loaded Nov 1 00:41:26.137985 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:41:26.137995 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:41:26.138004 kernel: PTP clock support registered Nov 1 00:41:26.138013 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:41:26.138023 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:41:26.138032 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:41:26.138042 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 1 00:41:26.138051 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:41:26.138064 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:41:26.138073 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:41:26.138082 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:41:26.138092 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:41:26.138101 kernel: pnp: PnP ACPI init Nov 1 00:41:26.138224 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:41:26.138240 kernel: pnp: PnP ACPI: found 6 devices Nov 1 00:41:26.138250 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:41:26.138262 kernel: NET: Registered PF_INET protocol family Nov 1 00:41:26.138271 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:41:26.138281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:41:26.138292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:41:26.138301 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:41:26.138310 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Nov 1 00:41:26.138320 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:41:26.138330 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:41:26.138340 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:41:26.138368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:41:26.138377 kernel: NET: Registered PF_XDP protocol family Nov 1 00:41:26.138483 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:41:26.138584 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:41:26.138686 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:41:26.138786 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 1 00:41:26.138886 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:41:26.139006 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 1 00:41:26.139025 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:41:26.139035 kernel: Initialise system trusted keyrings Nov 1 00:41:26.139045 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:41:26.139054 kernel: Key type asymmetric registered Nov 1 00:41:26.139064 kernel: Asymmetric key parser 'x509' registered Nov 1 00:41:26.139074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:41:26.139083 kernel: io scheduler mq-deadline registered Nov 1 00:41:26.139092 kernel: io scheduler kyber registered Nov 1 00:41:26.139110 kernel: io scheduler bfq registered Nov 1 00:41:26.139122 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:41:26.139132 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:41:26.139142 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:41:26.139151 kernel: ACPI: \_SB_.GSIE: Enabled 
at IRQ 20 Nov 1 00:41:26.139160 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:41:26.139171 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:41:26.139182 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:41:26.139193 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:41:26.139203 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:41:26.139331 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 00:41:26.139366 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:41:26.139493 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 00:41:26.139599 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:41:25 UTC (1761957685) Nov 1 00:41:26.139702 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:41:26.139715 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:41:26.139723 kernel: Segment Routing with IPv6 Nov 1 00:41:26.139731 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:41:26.139743 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:41:26.139751 kernel: Key type dns_resolver registered Nov 1 00:41:26.139759 kernel: IPI shorthand broadcast: enabled Nov 1 00:41:26.139767 kernel: sched_clock: Marking stable (654509527, 204810245)->(992524267, -133204495) Nov 1 00:41:26.139775 kernel: registered taskstats version 1 Nov 1 00:41:26.139783 kernel: Loading compiled-in X.509 certificates Nov 1 00:41:26.139791 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 00:41:26.139799 kernel: Key type .fscrypt registered Nov 1 00:41:26.139807 kernel: Key type fscrypt-provisioning registered Nov 1 00:41:26.139817 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:41:26.139825 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:41:26.139833 kernel: ima: No architecture policies found Nov 1 00:41:26.139841 kernel: clk: Disabling unused clocks Nov 1 00:41:26.139849 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 00:41:26.139857 kernel: Write protecting the kernel read-only data: 28672k Nov 1 00:41:26.139865 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 00:41:26.139873 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 00:41:26.139881 kernel: Run /init as init process Nov 1 00:41:26.139891 kernel: with arguments: Nov 1 00:41:26.139899 kernel: /init Nov 1 00:41:26.139907 kernel: with environment: Nov 1 00:41:26.139914 kernel: HOME=/ Nov 1 00:41:26.139922 kernel: TERM=linux Nov 1 00:41:26.139930 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:41:26.139940 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:41:26.139951 systemd[1]: Detected virtualization kvm. Nov 1 00:41:26.139995 systemd[1]: Detected architecture x86-64. Nov 1 00:41:26.140004 systemd[1]: Running in initrd. Nov 1 00:41:26.140012 systemd[1]: No hostname configured, using default hostname. Nov 1 00:41:26.140022 systemd[1]: Hostname set to . Nov 1 00:41:26.140031 systemd[1]: Initializing machine ID from VM UUID. 
Nov 1 00:41:26.140040 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:41:26.140049 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:41:26.140057 systemd[1]: Reached target cryptsetup.target. Nov 1 00:41:26.140068 systemd[1]: Reached target paths.target. Nov 1 00:41:26.140076 systemd[1]: Reached target slices.target. Nov 1 00:41:26.140095 systemd[1]: Reached target swap.target. Nov 1 00:41:26.140106 systemd[1]: Reached target timers.target. Nov 1 00:41:26.140115 systemd[1]: Listening on iscsid.socket. Nov 1 00:41:26.140124 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:41:26.140135 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:41:26.140145 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:41:26.140155 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:41:26.140163 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:41:26.140172 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:41:26.140181 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:41:26.140190 systemd[1]: Reached target sockets.target. Nov 1 00:41:26.140199 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:41:26.140208 systemd[1]: Finished network-cleanup.service. Nov 1 00:41:26.140219 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:41:26.140229 systemd[1]: Starting systemd-journald.service... Nov 1 00:41:26.140238 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:41:26.140248 systemd[1]: Starting systemd-resolved.service... Nov 1 00:41:26.140257 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:41:26.140267 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:41:26.140277 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:41:26.140286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:41:26.140298 kernel: audit: type=1130 audit(1761957686.131:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.140311 systemd-journald[198]: Journal started Nov 1 00:41:26.140383 systemd-journald[198]: Runtime Journal (/run/log/journal/c213cd6a84b641bcb4f82ae6ea5b5e37) is 6.0M, max 48.5M, 42.5M free. Nov 1 00:41:26.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.141048 systemd-modules-load[199]: Inserted module 'overlay' Nov 1 00:41:26.225494 systemd[1]: Started systemd-journald.service. Nov 1 00:41:26.225526 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:41:26.225540 kernel: Bridge firewalling registered Nov 1 00:41:26.225552 kernel: SCSI subsystem initialized Nov 1 00:41:26.225562 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:41:26.225573 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:41:26.225584 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:41:26.169273 systemd-resolved[200]: Positive Trust Anchors: Nov 1 00:41:26.169300 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:41:26.169341 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:41:26.172549 systemd-resolved[200]: Defaulting to hostname 'linux'. Nov 1 00:41:26.177301 systemd-modules-load[199]: Inserted module 'br_netfilter' Nov 1 00:41:26.207515 systemd-modules-load[199]: Inserted module 'dm_multipath' Nov 1 00:41:26.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.244786 systemd[1]: Started systemd-resolved.service. Nov 1 00:41:26.252805 kernel: audit: type=1130 audit(1761957686.244:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.252827 kernel: audit: type=1130 audit(1761957686.252:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.252942 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:41:26.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.260953 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:41:26.269121 kernel: audit: type=1130 audit(1761957686.260:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.269142 kernel: audit: type=1130 audit(1761957686.269:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.269338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:41:26.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.277315 systemd[1]: Reached target nss-lookup.target. Nov 1 00:41:26.285401 kernel: audit: type=1130 audit(1761957686.277:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:26.286376 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:41:26.289860 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:41:26.301194 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:41:26.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.307374 kernel: audit: type=1130 audit(1761957686.300:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.309535 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:41:26.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.313640 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:41:26.319900 kernel: audit: type=1130 audit(1761957686.312:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.325758 dracut-cmdline[221]: dracut-dracut-053 Nov 1 00:41:26.328941 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:41:26.421384 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:41:26.438376 kernel: iscsi: registered transport (tcp) Nov 1 00:41:26.461746 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:41:26.461796 kernel: QLogic iSCSI HBA Driver Nov 1 00:41:26.490448 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:41:26.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.494673 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:41:26.502269 kernel: audit: type=1130 audit(1761957686.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.546405 kernel: raid6: avx2x4 gen() 26794 MB/s Nov 1 00:41:26.564400 kernel: raid6: avx2x4 xor() 6862 MB/s Nov 1 00:41:26.582392 kernel: raid6: avx2x2 gen() 24744 MB/s Nov 1 00:41:26.600385 kernel: raid6: avx2x2 xor() 14386 MB/s Nov 1 00:41:26.618382 kernel: raid6: avx2x1 gen() 24326 MB/s Nov 1 00:41:26.636391 kernel: raid6: avx2x1 xor() 14832 MB/s Nov 1 00:41:26.654410 kernel: raid6: sse2x4 gen() 14304 MB/s Nov 1 00:41:26.672401 kernel: raid6: sse2x4 xor() 6488 MB/s Nov 1 00:41:26.690391 kernel: raid6: sse2x2 gen() 15452 MB/s Nov 1 00:41:26.708376 kernel: raid6: sse2x2 xor() 9407 MB/s Nov 1 00:41:26.726377 kernel: raid6: sse2x1 gen() 11668 MB/s Nov 1 00:41:26.744839 kernel: raid6: sse2x1 xor() 7611 MB/s Nov 1 00:41:26.744863 kernel: raid6: using algorithm avx2x4 gen() 26794 MB/s Nov 1 00:41:26.744873 kernel: raid6: .... 
xor() 6862 MB/s, rmw enabled Nov 1 00:41:26.747302 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:41:26.760375 kernel: xor: automatically using best checksumming function avx Nov 1 00:41:26.860379 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:41:26.868547 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:41:26.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.871000 audit: BPF prog-id=7 op=LOAD Nov 1 00:41:26.871000 audit: BPF prog-id=8 op=LOAD Nov 1 00:41:26.872377 systemd[1]: Starting systemd-udevd.service... Nov 1 00:41:26.885151 systemd-udevd[399]: Using default interface naming scheme 'v252'. Nov 1 00:41:26.889279 systemd[1]: Started systemd-udevd.service. Nov 1 00:41:26.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.893815 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:41:26.906313 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Nov 1 00:41:26.935037 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:41:26.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:26.937887 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:41:26.975291 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:41:26.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:27.024379 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:41:27.089563 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:41:27.089580 kernel: libata version 3.00 loaded. Nov 1 00:41:27.089590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:41:27.089599 kernel: GPT:9289727 != 19775487 Nov 1 00:41:27.089608 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:41:27.089624 kernel: GPT:9289727 != 19775487 Nov 1 00:41:27.089632 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:41:27.089641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:41:27.089650 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:41:27.089659 kernel: AES CTR mode by8 optimization enabled Nov 1 00:41:27.095184 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:41:27.115192 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:41:27.115213 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:41:27.115321 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:41:27.115427 kernel: scsi host0: ahci Nov 1 00:41:27.115540 kernel: scsi host1: ahci Nov 1 00:41:27.115629 kernel: scsi host2: ahci Nov 1 00:41:27.115715 kernel: scsi host3: ahci Nov 1 00:41:27.115802 kernel: scsi host4: ahci Nov 1 00:41:27.115894 kernel: scsi host5: ahci Nov 1 00:41:27.116008 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 1 00:41:27.116018 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 1 00:41:27.116027 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 1 00:41:27.116036 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 1 00:41:27.116046 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 1 00:41:27.116057 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 1 00:41:27.120038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:41:27.177886 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (448) Nov 1 00:41:27.178681 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:41:27.180578 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:41:27.187439 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:41:27.197771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:41:27.200990 systemd[1]: Starting disk-uuid.service... Nov 1 00:41:27.212237 disk-uuid[529]: Primary Header is updated. Nov 1 00:41:27.212237 disk-uuid[529]: Secondary Entries is updated. Nov 1 00:41:27.212237 disk-uuid[529]: Secondary Header is updated. Nov 1 00:41:27.218704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:41:27.223538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:41:27.425409 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:41:27.425507 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:41:27.435294 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:41:27.435426 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:41:27.435481 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:41:27.437391 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:41:27.439401 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:41:27.441756 kernel: ata3.00: applying bridge limits Nov 1 00:41:27.443114 kernel: ata3.00: configured for UDMA/100 Nov 1 00:41:27.448710 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:41:27.478764 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:41:27.496503 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:41:27.496523 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:41:28.222375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:41:28.222990 disk-uuid[530]: The operation has completed successfully. Nov 1 00:41:28.249489 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 1 00:41:28.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.249588 systemd[1]: Finished disk-uuid.service. Nov 1 00:41:28.260281 systemd[1]: Starting verity-setup.service... Nov 1 00:41:28.277390 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:41:28.299174 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:41:28.303121 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:41:28.308269 systemd[1]: Finished verity-setup.service. Nov 1 00:41:28.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.369390 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:41:28.370207 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:41:28.370578 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:41:28.371837 systemd[1]: Starting ignition-setup.service... Nov 1 00:41:28.375891 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:41:28.388151 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:41:28.388186 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:41:28.388196 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:41:28.399616 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:41:28.445226 systemd[1]: Finished ignition-setup.service. Nov 1 00:41:28.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.447718 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:41:28.468744 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:41:28.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.471000 audit: BPF prog-id=9 op=LOAD Nov 1 00:41:28.472440 systemd[1]: Starting systemd-networkd.service... Nov 1 00:41:28.494817 systemd-networkd[714]: lo: Link UP Nov 1 00:41:28.494826 systemd-networkd[714]: lo: Gained carrier Nov 1 00:41:28.495318 systemd-networkd[714]: Enumeration completed Nov 1 00:41:28.495540 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:41:28.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.497242 systemd[1]: Started systemd-networkd.service. Nov 1 00:41:28.497730 systemd[1]: Reached target network.target. Nov 1 00:41:28.498823 systemd-networkd[714]: eth0: Link UP Nov 1 00:41:28.507917 ignition[696]: Ignition 2.14.0 Nov 1 00:41:28.498826 systemd-networkd[714]: eth0: Gained carrier Nov 1 00:41:28.507928 ignition[696]: Stage: fetch-offline Nov 1 00:41:28.503482 systemd[1]: Starting iscsiuio.service... 
Nov 1 00:41:28.508036 ignition[696]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:28.508533 systemd[1]: Started iscsiuio.service. Nov 1 00:41:28.508049 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:28.508186 ignition[696]: parsed url from cmdline: "" Nov 1 00:41:28.508191 ignition[696]: no config URL provided Nov 1 00:41:28.508197 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:41:28.508207 ignition[696]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:41:28.508230 ignition[696]: op(1): [started] loading QEMU firmware config module Nov 1 00:41:28.508236 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:41:28.511586 ignition[696]: op(1): [finished] loading QEMU firmware config module Nov 1 00:41:28.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.529458 systemd[1]: Starting iscsid.service... Nov 1 00:41:28.535123 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:41:28.535123 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:41:28.535123 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:41:28.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.537664 systemd[1]: Started iscsid.service. Nov 1 00:41:28.554636 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:41:28.554636 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:41:28.554636 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:41:28.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.540547 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:41:28.558715 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:41:28.561199 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:41:28.565421 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:41:28.567166 systemd[1]: Reached target remote-fs.target. Nov 1 00:41:28.569498 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:41:28.580170 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:41:28.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:28.641176 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:41:28.641657 ignition[696]: parsing config with SHA512: fea70c48623751e290b00e5be645f5f91fa2fda8bf3ff454e6b6d922a09e4f3eb22e68c4104a787b1274684d804f45b5799e0506697edd45d3788ab3a52f2bfe Nov 1 00:41:28.649843 unknown[696]: fetched base config from "system" Nov 1 00:41:28.650066 unknown[696]: fetched user config from "qemu" Nov 1 00:41:28.650536 ignition[696]: fetch-offline: fetch-offline passed Nov 1 00:41:28.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.709677 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:41:28.650585 ignition[696]: Ignition finished successfully Nov 1 00:41:28.712512 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:41:28.713839 systemd[1]: Starting ignition-kargs.service... Nov 1 00:41:28.729998 ignition[740]: Ignition 2.14.0 Nov 1 00:41:28.730009 ignition[740]: Stage: kargs Nov 1 00:41:28.730103 ignition[740]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:28.730112 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:28.735963 ignition[740]: kargs: kargs passed Nov 1 00:41:28.736007 ignition[740]: Ignition finished successfully Nov 1 00:41:28.739046 systemd[1]: Finished ignition-kargs.service. Nov 1 00:41:28.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.742591 systemd[1]: Starting ignition-disks.service... Nov 1 00:41:28.749654 ignition[746]: Ignition 2.14.0 Nov 1 00:41:28.749664 ignition[746]: Stage: disks Nov 1 00:41:28.749765 ignition[746]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:28.749774 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:28.750791 ignition[746]: disks: disks passed Nov 1 00:41:28.750828 ignition[746]: Ignition finished successfully Nov 1 00:41:28.758877 systemd[1]: Finished ignition-disks.service. Nov 1 00:41:28.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.761988 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:41:28.763552 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:41:28.763626 systemd[1]: Reached target local-fs.target. Nov 1 00:41:28.767756 systemd[1]: Reached target sysinit.target. Nov 1 00:41:28.767826 systemd[1]: Reached target basic.target. Nov 1 00:41:28.772802 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:41:28.788226 systemd-fsck[754]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:41:28.794339 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:41:28.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.795435 systemd[1]: Mounting sysroot.mount... Nov 1 00:41:28.805386 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Nov 1 00:41:28.806240 systemd[1]: Mounted sysroot.mount. Nov 1 00:41:28.809192 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:41:28.814036 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:41:28.817281 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:41:28.820096 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:41:28.820144 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:41:28.827837 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:41:28.831612 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:41:28.837791 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:41:28.846787 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:41:28.852630 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:41:28.859618 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:41:28.902639 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:41:28.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.907118 systemd[1]: Starting ignition-mount.service... Nov 1 00:41:28.910998 systemd[1]: Starting sysroot-boot.service... Nov 1 00:41:28.914266 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:41:28.926438 ignition[807]: INFO : Ignition 2.14.0 Nov 1 00:41:28.926438 ignition[807]: INFO : Stage: mount Nov 1 00:41:28.931764 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:28.931764 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:28.931764 ignition[807]: INFO : mount: mount passed Nov 1 00:41:28.931764 ignition[807]: INFO : Ignition finished successfully Nov 1 00:41:28.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:28.929126 systemd[1]: Finished ignition-mount.service. Nov 1 00:41:28.937664 systemd[1]: Finished sysroot-boot.service. Nov 1 00:41:29.312386 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:41:29.324192 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Nov 1 00:41:29.324231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:41:29.324249 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:41:29.325580 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:41:29.331504 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:41:29.334106 systemd[1]: Starting ignition-files.service... 
Nov 1 00:41:29.352670 ignition[835]: INFO : Ignition 2.14.0 Nov 1 00:41:29.352670 ignition[835]: INFO : Stage: files Nov 1 00:41:29.355639 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:29.355639 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:29.361368 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:41:29.363841 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:41:29.363841 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:41:29.369302 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:41:29.372040 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:41:29.375381 unknown[835]: wrote ssh authorized keys file for user: core Nov 1 00:41:29.377339 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:41:29.380169 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:41:29.383683 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:41:29.431882 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:41:29.496536 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:41:29.499944 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:41:29.499944 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 00:41:29.561558 systemd-networkd[714]: eth0: Gained IPv6LL Nov 1 00:41:29.742778 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:41:30.161202 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:41:30.161202 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:41:30.167473 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:41:30.167473 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:41:30.167473 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:41:30.176291 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:41:30.179630 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:41:30.182771 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:41:30.186046 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
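[Editor's note] The files-stage entries above boil down to "GET a remote asset, write it beneath the future root at /sysroot, report the result". The sketch below mimics that flow for the helm tarball named in the log; it is not Ignition's implementation, the function name and the /tmp/sysroot prefix are placeholders, and no checksum/retry handling is shown:

    # Illustrative only: fetch a remote file and place it under a sysroot prefix,
    # roughly what the ignition "files" stage GET/writing-file messages describe.
    import pathlib
    import urllib.request

    def fetch_to_sysroot(url: str, dest: str, sysroot: str = "/tmp/sysroot") -> None:
        target = pathlib.Path(sysroot) / dest.lstrip("/")
        target.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp, open(target, "wb") as out:
            out.write(resp.read())
            # The log prints "GET result: OK" for a successful attempt.
            print(f"GET result: {'OK' if resp.status == 200 else resp.status}")

    if __name__ == "__main__":
        fetch_to_sysroot(
            "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
            "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        )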
Nov 1 00:41:30.189133 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:41:30.192383 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:41:30.195442 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:30.199604 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:30.199604 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:30.208086 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 1 00:41:30.423722 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:41:31.499629 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:41:31.499629 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:41:31.507214 ignition[835]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:41:31.584594 ignition[835]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:41:31.588183 ignition[835]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:41:31.588183 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:41:31.595985 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:41:31.598944 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:41:31.602768 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:41:31.606178 ignition[835]: INFO : files: files passed Nov 1 00:41:31.607682 ignition[835]: INFO : Ignition finished successfully Nov 1 00:41:31.611057 systemd[1]: Finished ignition-files.service. Nov 1 00:41:31.622155 kernel: kauditd_printk_skb: 24 callbacks suppressed Nov 1 00:41:31.622180 kernel: audit: type=1130 audit(1761957691.612:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.614872 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:41:31.623948 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:41:31.630954 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Nov 1 00:41:31.645731 kernel: audit: type=1130 audit(1761957691.632:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.645756 kernel: audit: type=1131 audit(1761957691.632:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.624731 systemd[1]: Starting ignition-quench.service... Nov 1 00:41:31.627767 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:41:31.627852 systemd[1]: Finished ignition-quench.service. Nov 1 00:41:31.651483 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:41:31.652223 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:41:31.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.658641 systemd[1]: Reached target ignition-complete.target. Nov 1 00:41:31.664371 kernel: audit: type=1130 audit(1761957691.658:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.668956 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:41:31.691859 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:41:31.691986 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:41:31.727860 kernel: audit: type=1130 audit(1761957691.695:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:31.727939 kernel: audit: type=1131 audit(1761957691.695:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.695570 systemd[1]: Reached target initrd-fs.target. Nov 1 00:41:31.708305 systemd[1]: Reached target initrd.target. Nov 1 00:41:31.727883 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:41:31.729704 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:41:31.750043 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:41:31.758100 kernel: audit: type=1130 audit(1761957691.749:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.758127 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:41:31.772895 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:41:31.774995 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:41:31.778303 systemd[1]: Stopped target timers.target. Nov 1 00:41:31.781435 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:41:31.790794 kernel: audit: type=1131 audit(1761957691.783:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:31.781561 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:41:31.784338 systemd[1]: Stopped target initrd.target. Nov 1 00:41:32.003080 systemd[1]: Stopped target basic.target. Nov 1 00:41:32.006309 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:41:32.008082 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:41:32.010617 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:41:32.013583 systemd[1]: Stopped target remote-fs.target. Nov 1 00:41:32.016217 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:41:32.018958 systemd[1]: Stopped target sysinit.target. Nov 1 00:41:32.020208 systemd[1]: Stopped target local-fs.target. Nov 1 00:41:32.023930 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:41:32.025070 systemd[1]: Stopped target swap.target. Nov 1 00:41:32.037542 kernel: audit: type=1131 audit(1761957692.030:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:32.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.028650 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:41:32.028823 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:41:32.048170 kernel: audit: type=1131 audit(1761957692.041:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.031359 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:41:32.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.039110 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:41:32.039229 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:41:32.042004 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:41:32.042100 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:41:32.049720 systemd[1]: Stopped target paths.target. Nov 1 00:41:32.052017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:41:32.055427 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:41:32.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.058150 systemd[1]: Stopped target slices.target. Nov 1 00:41:32.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.059526 systemd[1]: Stopped target sockets.target. Nov 1 00:41:32.063065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:41:32.063209 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:41:32.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.079556 iscsid[725]: iscsid shutting down. Nov 1 00:41:32.066396 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:41:32.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:32.084467 ignition[875]: INFO : Ignition 2.14.0 Nov 1 00:41:32.084467 ignition[875]: INFO : Stage: umount Nov 1 00:41:32.084467 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:41:32.084467 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:41:32.084467 ignition[875]: INFO : umount: umount passed Nov 1 00:41:32.084467 ignition[875]: INFO : Ignition finished successfully Nov 1 00:41:32.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.066484 systemd[1]: Stopped ignition-files.service. Nov 1 00:41:32.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.070077 systemd[1]: Stopping ignition-mount.service... Nov 1 00:41:32.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.071526 systemd[1]: Stopping iscsid.service... Nov 1 00:41:32.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.073538 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:41:32.073655 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:41:32.079015 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:41:32.080650 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:41:32.080782 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:41:32.082995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:41:32.083096 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:41:32.087261 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:41:32.087342 systemd[1]: Stopped ignition-mount.service. Nov 1 00:41:32.092149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:41:32.092228 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:41:32.093790 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:41:32.093841 systemd[1]: Stopped ignition-disks.service. Nov 1 00:41:32.098239 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:41:32.098282 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:41:32.101013 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:41:32.101055 systemd[1]: Stopped ignition-setup.service. Nov 1 00:41:32.104642 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
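[Editor's note] Each audit SERVICE_START / SERVICE_STOP record interleaved above carries the affected unit inside msg='unit=…'. A minimal sketch for tabulating them from a saved text copy of this console log (the script and file names are hypothetical; the regex targets only the msg='unit=…' form seen here):

    # Pull (action, unit) pairs out of audit SERVICE_START/SERVICE_STOP records.
    # Usage (placeholder file name): python3 parse_units.py < boot.log
    import re
    import sys

    AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@\\.-]+)")

    def service_events(text: str):
        # findall returns one (action, unit) tuple per matching audit record.
        for action, unit in AUDIT_RE.findall(text):
            yield action, unit

    if __name__ == "__main__":
        for action, unit in service_events(sys.stdin.read()):
            print(f"{action:14} {unit}")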
Nov 1 00:41:32.156655 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:41:32.156800 systemd[1]: Stopped iscsid.service. Nov 1 00:41:32.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.160536 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:41:32.160572 systemd[1]: Closed iscsid.socket. Nov 1 00:41:32.163249 systemd[1]: Stopping iscsiuio.service... Nov 1 00:41:32.168699 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:41:32.168866 systemd[1]: Stopped iscsiuio.service. Nov 1 00:41:32.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.172643 systemd[1]: Stopped target network.target. Nov 1 00:41:32.172752 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:41:32.172809 systemd[1]: Closed iscsiuio.socket. Nov 1 00:41:32.177774 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:41:32.178927 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:41:32.184412 systemd-networkd[714]: eth0: DHCPv6 lease lost Nov 1 00:41:32.186774 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:41:32.186924 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:41:32.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.191628 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:41:32.192000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:41:32.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.191722 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:41:32.196567 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:41:32.196600 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:41:32.197000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:41:32.201448 systemd[1]: Stopping network-cleanup.service... Nov 1 00:41:32.204175 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:41:32.206095 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:41:32.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.209041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:41:32.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.209084 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:41:32.212122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:41:32.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.212163 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:41:32.216623 systemd[1]: Stopping systemd-udevd.service... 
Nov 1 00:41:32.221913 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:41:32.222779 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:41:32.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.222927 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:41:32.225595 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:41:32.225758 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:41:32.230623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:41:32.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.230684 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:41:32.232905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:41:32.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.232948 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:41:32.235529 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:41:32.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.235585 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:41:32.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.238600 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:41:32.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.238649 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:41:32.241438 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:41:32.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:32.241486 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:41:32.244493 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:41:32.244546 systemd[1]: Stopped initrd-setup-root.service. 
Nov 1 00:41:32.248128 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:41:32.250520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:41:32.250579 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:41:32.253769 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:41:32.253892 systemd[1]: Stopped network-cleanup.service. Nov 1 00:41:32.256600 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:41:32.256702 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:41:32.259544 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:41:32.263533 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:41:32.277012 systemd[1]: Switching root. Nov 1 00:41:32.298295 systemd-journald[198]: Journal stopped Nov 1 00:41:36.973370 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Nov 1 00:41:36.973425 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:41:36.973442 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:41:36.973452 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:41:36.973465 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:41:36.973480 kernel: SELinux: policy capability open_perms=1 Nov 1 00:41:36.973499 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:41:36.973509 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:41:36.973519 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:41:36.973528 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:41:36.973542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:41:36.973552 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:41:36.973563 systemd[1]: Successfully loaded SELinux policy in 51.405ms. Nov 1 00:41:36.973583 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.512ms. Nov 1 00:41:36.973599 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:41:36.973612 systemd[1]: Detected virtualization kvm. Nov 1 00:41:36.973625 systemd[1]: Detected architecture x86-64. Nov 1 00:41:36.973637 systemd[1]: Detected first boot. Nov 1 00:41:36.973650 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:41:36.973675 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:41:36.973688 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:41:36.973702 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:36.973713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:36.973724 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:41:36.973736 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 00:41:36.973745 kernel: audit: type=1334 audit(1761957696.719:85): prog-id=12 op=LOAD Nov 1 00:41:36.973756 kernel: audit: type=1334 audit(1761957696.719:86): prog-id=3 op=UNLOAD Nov 1 00:41:36.973773 kernel: audit: type=1334 audit(1761957696.722:87): prog-id=13 op=LOAD Nov 1 00:41:36.973782 kernel: audit: type=1334 audit(1761957696.726:88): prog-id=14 op=LOAD Nov 1 00:41:36.973792 kernel: audit: type=1334 audit(1761957696.726:89): prog-id=4 op=UNLOAD Nov 1 00:41:36.973801 kernel: audit: type=1334 audit(1761957696.726:90): prog-id=5 op=UNLOAD Nov 1 00:41:36.973810 kernel: audit: type=1334 audit(1761957696.730:91): prog-id=15 op=LOAD Nov 1 00:41:36.973820 kernel: audit: type=1334 audit(1761957696.730:92): prog-id=12 op=UNLOAD Nov 1 00:41:36.973829 kernel: audit: type=1334 audit(1761957696.731:93): prog-id=16 op=LOAD Nov 1 00:41:36.973839 kernel: audit: type=1334 audit(1761957696.735:94): prog-id=17 op=LOAD Nov 1 00:41:36.973851 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:41:36.973861 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:41:36.973872 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:41:36.973882 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:41:36.973896 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:41:36.973907 systemd[1]: Created slice system-getty.slice. Nov 1 00:41:36.973917 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:41:36.973928 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:41:36.973939 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:41:36.973951 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:41:36.973961 systemd[1]: Created slice user.slice. Nov 1 00:41:36.973976 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:41:36.973986 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:41:36.974003 systemd[1]: Set up automount boot.automount. Nov 1 00:41:36.974014 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:41:36.974024 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:41:36.974035 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:41:36.974047 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:41:36.974057 systemd[1]: Reached target integritysetup.target. Nov 1 00:41:36.974068 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:41:36.974078 systemd[1]: Reached target remote-fs.target. Nov 1 00:41:36.974088 systemd[1]: Reached target slices.target. Nov 1 00:41:36.974099 systemd[1]: Reached target swap.target. Nov 1 00:41:36.974109 systemd[1]: Reached target torcx.target. Nov 1 00:41:36.974119 systemd[1]: Reached target veritysetup.target. Nov 1 00:41:36.974129 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:41:36.974141 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:41:36.974151 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:41:36.974162 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:41:36.974172 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:41:36.974182 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:41:36.974193 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:41:36.974203 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:41:36.974213 systemd[1]: Mounting media.mount... 
Nov 1 00:41:36.974230 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:36.974241 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:41:36.974253 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:41:36.974263 systemd[1]: Mounting tmp.mount... Nov 1 00:41:36.974274 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:41:36.974284 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:36.974295 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:41:36.974306 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:41:36.974316 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:36.974326 systemd[1]: Starting modprobe@drm.service... Nov 1 00:41:36.974336 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:36.974436 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:41:36.974447 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:36.974458 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:41:36.974469 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:41:36.974479 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:41:36.974488 kernel: fuse: init (API version 7.34) Nov 1 00:41:36.974498 kernel: loop: module loaded Nov 1 00:41:36.974508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:41:36.974520 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:41:36.974530 systemd[1]: Stopped systemd-journald.service. Nov 1 00:41:36.974549 systemd[1]: Starting systemd-journald.service... Nov 1 00:41:36.974560 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:41:36.974571 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:41:36.974581 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:41:36.974594 systemd-journald[997]: Journal started Nov 1 00:41:36.974634 systemd-journald[997]: Runtime Journal (/run/log/journal/c213cd6a84b641bcb4f82ae6ea5b5e37) is 6.0M, max 48.5M, 42.5M free. 
Nov 1 00:41:32.378000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:41:32.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:41:32.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:41:32.738000 audit: BPF prog-id=10 op=LOAD Nov 1 00:41:32.738000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:41:32.738000 audit: BPF prog-id=11 op=LOAD Nov 1 00:41:32.738000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:41:32.778000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:41:32.778000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:32.778000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:41:32.804000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:41:32.804000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:32.804000 audit: CWD cwd="/" Nov 1 00:41:32.804000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:32.804000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:32.804000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:41:36.719000 audit: BPF prog-id=12 op=LOAD Nov 1 00:41:36.719000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:41:36.722000 audit: BPF prog-id=13 op=LOAD Nov 1 00:41:36.726000 audit: BPF prog-id=14 op=LOAD Nov 1 00:41:36.726000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:41:36.726000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:41:36.730000 audit: BPF prog-id=15 op=LOAD Nov 1 00:41:36.730000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:41:36.731000 audit: BPF prog-id=16 
op=LOAD Nov 1 00:41:36.735000 audit: BPF prog-id=17 op=LOAD Nov 1 00:41:36.735000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:41:36.735000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:41:36.738000 audit: BPF prog-id=18 op=LOAD Nov 1 00:41:36.738000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:41:36.739000 audit: BPF prog-id=19 op=LOAD Nov 1 00:41:36.739000 audit: BPF prog-id=20 op=LOAD Nov 1 00:41:36.739000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:41:36.739000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:41:36.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.748000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:41:36.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.955000 audit: BPF prog-id=21 op=LOAD Nov 1 00:41:36.955000 audit: BPF prog-id=22 op=LOAD Nov 1 00:41:36.955000 audit: BPF prog-id=23 op=LOAD Nov 1 00:41:36.955000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:41:36.955000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:41:36.970000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:41:36.970000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffc65af3c0 a2=4000 a3=7fffc65af45c items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:36.970000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:41:32.776747 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:36.718226 systemd[1]: Queued start job for default target multi-user.target. 
Nov 1 00:41:32.777016 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:41:36.718241 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:41:32.777035 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:41:36.739883 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:41:32.777068 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:41:32.777077 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:41:32.777109 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:41:32.777121 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:41:32.777402 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:41:32.777437 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:41:32.777448 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:41:32.778112 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:41:32.778145 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:41:32.778162 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:41:32.778176 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:41:32.778192 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:41:32.778205 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:41:36.412384 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="image unpacked" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:36.977707 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:41:36.412877 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:36.413102 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:36.413496 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:41:36.413596 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:41:36.413749 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-11-01T00:41:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:41:36.981064 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:41:36.981100 systemd[1]: Stopped verity-setup.service. Nov 1 00:41:36.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.986377 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:36.990674 systemd[1]: Started systemd-journald.service. Nov 1 00:41:36.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:36.991552 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:41:36.993036 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:41:36.994430 systemd[1]: Mounted media.mount. Nov 1 00:41:36.995773 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:41:36.997268 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:41:36.998887 systemd[1]: Mounted tmp.mount. Nov 1 00:41:37.000317 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:41:37.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.002109 systemd[1]: Finished kmod-static-nodes.service. 
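[Editor's note] The torcx-generator messages above describe a store scan: it walks a fixed list of store directories, skips the ones that do not exist, and caches each "<name>:<reference>.torcx.tgz" archive it finds before unpacking and propagating the selected images. The sketch below mirrors only that scan step, with the store paths copied from the log; the parsing is a simplification, not the generator's Go code:

    # Walk candidate torcx store directories and report archives, loosely matching
    # the "store skipped" / "new archive/reference added to cache" messages above.
    import pathlib

    STORES = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.8",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.8",
        "/var/lib/torcx/store",
    ]

    def scan_stores(stores=STORES):
        for store in stores:
            root = pathlib.Path(store)
            if not root.is_dir():
                print(f"store skipped: {store}")
                continue
            for archive in sorted(root.glob("*.torcx.tgz")):
                name, _, reference = archive.name[: -len(".torcx.tgz")].partition(":")
                print(f"archive cached: name={name} reference={reference} path={archive}")

    if __name__ == "__main__":
        scan_stores()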
Nov 1 00:41:37.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.003835 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:41:37.003962 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:41:37.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.005675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:37.005791 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:37.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.007784 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:41:37.007916 systemd[1]: Finished modprobe@drm.service. Nov 1 00:41:37.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.009632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:37.009775 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:37.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.011511 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:41:37.011628 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:41:37.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.016897 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:37.017011 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:41:37.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.018833 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:41:37.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.020640 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:41:37.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.022988 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:41:37.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.024731 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:41:37.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.026852 systemd[1]: Reached target network-pre.target. Nov 1 00:41:37.029962 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:41:37.033138 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:41:37.034617 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:41:37.035979 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:41:37.038540 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:41:37.040138 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:37.041305 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:41:37.043380 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:37.044555 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:41:37.046165 systemd-journald[997]: Time spent on flushing to /var/log/journal/c213cd6a84b641bcb4f82ae6ea5b5e37 is 15.088ms for 1119 entries. Nov 1 00:41:37.046165 systemd-journald[997]: System Journal (/var/log/journal/c213cd6a84b641bcb4f82ae6ea5b5e37) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:41:37.090810 systemd-journald[997]: Received client request to flush runtime journal. Nov 1 00:41:37.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:37.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.048934 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:41:37.051585 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:41:37.055560 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:41:37.092189 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:41:37.057628 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:41:37.059629 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:41:37.064090 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:41:37.071303 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:41:37.073335 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:41:37.092102 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:41:37.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.952025 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:41:37.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:37.954000 audit: BPF prog-id=24 op=LOAD Nov 1 00:41:37.954000 audit: BPF prog-id=25 op=LOAD Nov 1 00:41:37.954000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:41:37.954000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:41:37.955488 systemd[1]: Starting systemd-udevd.service... Nov 1 00:41:37.980367 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Nov 1 00:41:38.001145 systemd[1]: Started systemd-udevd.service. Nov 1 00:41:38.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.006000 audit: BPF prog-id=26 op=LOAD Nov 1 00:41:38.011000 audit: BPF prog-id=27 op=LOAD Nov 1 00:41:38.011000 audit: BPF prog-id=28 op=LOAD Nov 1 00:41:38.011000 audit: BPF prog-id=29 op=LOAD Nov 1 00:41:38.007249 systemd[1]: Starting systemd-networkd.service... Nov 1 00:41:38.013587 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:41:38.026966 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Nov 1 00:41:38.049290 systemd[1]: Started systemd-userdbd.service. Nov 1 00:41:38.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.077547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:41:38.082380 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:41:38.095386 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:41:38.112019 systemd-networkd[1022]: lo: Link UP Nov 1 00:41:38.112030 systemd-networkd[1022]: lo: Gained carrier Nov 1 00:41:38.113044 systemd-networkd[1022]: Enumeration completed Nov 1 00:41:38.113170 systemd[1]: Started systemd-networkd.service. 
Nov 1 00:41:38.113197 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:41:38.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.116261 systemd-networkd[1022]: eth0: Link UP Nov 1 00:41:38.116274 systemd-networkd[1022]: eth0: Gained carrier Nov 1 00:41:38.101000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:41:38.101000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d8488c4fa0 a1=338ec a2=7f005c705bc5 a3=5 items=110 ppid=1014 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:38.101000 audit: CWD cwd="/" Nov 1 00:41:38.101000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=1 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=2 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=3 name=(null) inode=13049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=4 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=5 name=(null) inode=13050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=6 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=7 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=8 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=9 name=(null) inode=13052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=10 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=11 name=(null) inode=13053 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=12 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=13 name=(null) inode=13054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=14 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=15 name=(null) inode=13055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=16 name=(null) inode=13051 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=17 name=(null) inode=13056 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=18 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=19 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=20 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=21 name=(null) inode=13058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=22 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=23 name=(null) inode=13059 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=24 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=25 name=(null) inode=13060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=26 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=27 name=(null) inode=13061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=28 name=(null) inode=13057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=29 name=(null) inode=13062 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=30 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=31 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=32 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=33 name=(null) inode=13064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=34 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=35 name=(null) inode=13065 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=36 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=37 name=(null) inode=13066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=38 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=39 name=(null) inode=13067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=40 name=(null) inode=13063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=41 name=(null) inode=13068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=42 name=(null) inode=13048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=43 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 
audit: PATH item=44 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=45 name=(null) inode=13070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=46 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=47 name=(null) inode=13071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=48 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=49 name=(null) inode=13072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=50 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=51 name=(null) inode=13073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=52 name=(null) inode=13069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=53 name=(null) inode=13074 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=55 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=56 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=57 name=(null) inode=13076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=58 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=59 name=(null) inode=13077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=60 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=61 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=62 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=63 name=(null) inode=13079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=64 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=65 name=(null) inode=13080 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=66 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=67 name=(null) inode=13081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=68 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=69 name=(null) inode=13082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=70 name=(null) inode=13078 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=71 name=(null) inode=13083 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=72 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=73 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=74 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=75 name=(null) inode=13085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=76 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=77 name=(null) inode=13086 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=78 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=79 name=(null) inode=13087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=80 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=81 name=(null) inode=13088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.129771 systemd-networkd[1022]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:41:38.132787 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:41:38.156659 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:41:38.156826 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:41:38.156930 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:41:38.156957 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:41:38.101000 audit: PATH item=82 name=(null) inode=13084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=83 name=(null) inode=13089 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=84 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=85 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=86 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=87 name=(null) inode=13091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=88 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=89 name=(null) inode=13092 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=90 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=91 name=(null) inode=13093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=92 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=93 name=(null) inode=13094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=94 name=(null) inode=13090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=95 name=(null) inode=13095 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=96 name=(null) inode=13075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=97 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=98 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=99 name=(null) inode=13097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=100 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=101 name=(null) inode=13098 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=102 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=103 name=(null) inode=13099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=104 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=105 name=(null) inode=13100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=106 name=(null) inode=13096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:41:38.101000 audit: PATH item=107 name=(null) inode=13101 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PATH item=109 name=(null) inode=13102 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:41:38.101000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:41:38.209410 kernel: kvm: Nested Virtualization enabled Nov 1 00:41:38.209616 kernel: SVM: kvm: Nested Paging enabled Nov 1 00:41:38.209665 kernel: SVM: Virtual VMLOAD VMSAVE supported Nov 1 00:41:38.209701 kernel: SVM: Virtual GIF supported Nov 1 00:41:38.250391 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:41:38.276862 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:41:38.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.280147 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:41:38.290380 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:41:38.318457 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:41:38.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.320555 systemd[1]: Reached target cryptsetup.target. Nov 1 00:41:38.323810 systemd[1]: Starting lvm2-activation.service... Nov 1 00:41:38.327377 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:41:38.352426 systemd[1]: Finished lvm2-activation.service. Nov 1 00:41:38.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.354064 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:41:38.355462 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:41:38.355496 systemd[1]: Reached target local-fs.target. Nov 1 00:41:38.356832 systemd[1]: Reached target machines.target. Nov 1 00:41:38.359669 systemd[1]: Starting ldconfig.service... Nov 1 00:41:38.361204 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:38.361252 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:38.362459 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:41:38.364841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:41:38.367799 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:41:38.370646 systemd[1]: Starting systemd-sysext.service... 
Nov 1 00:41:38.372224 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1053 (bootctl) Nov 1 00:41:38.373641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:41:38.378968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:41:38.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.387766 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:41:38.393821 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:41:38.393987 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:41:38.407396 kernel: loop0: detected capacity change from 0 to 229808 Nov 1 00:41:38.419474 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Nov 1 00:41:38.419474 systemd-fsck[1061]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:41:38.421747 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:41:38.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.425589 systemd[1]: Mounting boot.mount... Nov 1 00:41:38.448687 systemd[1]: Mounted boot.mount. Nov 1 00:41:38.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:38.466082 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:41:39.249520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:41:39.252921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:41:39.254822 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:41:39.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.265379 kernel: loop1: detected capacity change from 0 to 229808 Nov 1 00:41:39.265677 ldconfig[1052]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:41:39.270043 systemd[1]: Finished ldconfig.service. Nov 1 00:41:39.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.272928 (sd-sysext)[1066]: Using extensions 'kubernetes'. Nov 1 00:41:39.273414 (sd-sysext)[1066]: Merged extensions into '/usr'. Nov 1 00:41:39.290130 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.291713 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:41:39.293180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.294647 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:39.297601 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:39.299817 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:41:39.301216 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.301343 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.301499 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.304337 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:41:39.306215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:39.306335 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:39.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.308192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:39.308313 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:39.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.310278 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:39.310402 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:39.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.312202 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:39.312306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.313213 systemd[1]: Finished systemd-sysext.service. Nov 1 00:41:39.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.315703 systemd[1]: Starting ensure-sysext.service... Nov 1 00:41:39.317700 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:41:39.322220 systemd[1]: Reloading. Nov 1 00:41:39.330209 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:41:39.332604 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Nov 1 00:41:39.335633 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:41:39.384982 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-11-01T00:41:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:41:39.385388 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-11-01T00:41:39Z" level=info msg="torcx already run" Nov 1 00:41:39.446536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:41:39.446554 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:41:39.463696 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:41:39.516000 audit: BPF prog-id=30 op=LOAD Nov 1 00:41:39.516000 audit: BPF prog-id=31 op=LOAD Nov 1 00:41:39.516000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:41:39.516000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:41:39.516000 audit: BPF prog-id=32 op=LOAD Nov 1 00:41:39.516000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:41:39.516000 audit: BPF prog-id=33 op=LOAD Nov 1 00:41:39.517000 audit: BPF prog-id=34 op=LOAD Nov 1 00:41:39.517000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:41:39.517000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:41:39.517000 audit: BPF prog-id=35 op=LOAD Nov 1 00:41:39.517000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:41:39.517000 audit: BPF prog-id=36 op=LOAD Nov 1 00:41:39.517000 audit: BPF prog-id=37 op=LOAD Nov 1 00:41:39.517000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:41:39.517000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:41:39.518000 audit: BPF prog-id=38 op=LOAD Nov 1 00:41:39.518000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:41:39.524355 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:41:39.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.528973 systemd[1]: Starting audit-rules.service... Nov 1 00:41:39.531085 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:41:39.533868 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:41:39.536000 audit: BPF prog-id=39 op=LOAD Nov 1 00:41:39.537146 systemd[1]: Starting systemd-resolved.service... Nov 1 00:41:39.538000 audit: BPF prog-id=40 op=LOAD Nov 1 00:41:39.540037 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:41:39.542551 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:41:39.544672 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:41:39.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:41:39.546000 audit[1147]: SYSTEM_BOOT pid=1147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.551904 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:41:39.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.557080 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.557431 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.559386 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:39.562994 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:39.565887 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:39.567406 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.567559 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.569107 systemd[1]: Starting systemd-update-done.service... Nov 1 00:41:39.570819 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:41:39.570948 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.572396 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:41:39.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:41:39.575089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:39.575250 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:39.574000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:41:39.574000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe172f06a0 a2=420 a3=0 items=0 ppid=1136 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:41:39.574000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:41:39.575742 augenrules[1157]: No rules Nov 1 00:41:39.577670 systemd[1]: Finished audit-rules.service. Nov 1 00:41:39.579834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:39.579985 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:39.582397 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:39.582547 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:39.584786 systemd[1]: Finished systemd-update-done.service. Nov 1 00:41:39.591134 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 00:41:39.591409 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.593156 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:39.596130 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:39.599468 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:39.604441 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.604620 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.604800 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:41:39.604920 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.607314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:39.607650 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:39.610099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:39.610237 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:39.612701 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:39.612887 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:39.618033 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.618324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.619979 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:41:39.623103 systemd[1]: Starting modprobe@drm.service... Nov 1 00:41:39.625780 systemd-timesyncd[1146]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:41:39.625839 systemd-timesyncd[1146]: Initial clock synchronization to Sat 2025-11-01 00:41:39.844042 UTC. Nov 1 00:41:39.626723 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:41:39.631774 systemd[1]: Starting modprobe@loop.service... Nov 1 00:41:39.632527 systemd-resolved[1145]: Positive Trust Anchors: Nov 1 00:41:39.632798 systemd-resolved[1145]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:41:39.632892 systemd-resolved[1145]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:41:39.633440 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.633677 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.634882 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:41:39.637410 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 1 00:41:39.637606 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:41:39.639289 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:41:39.642157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:41:39.642332 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:41:39.644186 systemd-resolved[1145]: Defaulting to hostname 'linux'. Nov 1 00:41:39.644422 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:41:39.644535 systemd[1]: Finished modprobe@drm.service. Nov 1 00:41:39.646401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:41:39.646509 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:41:39.648360 systemd[1]: Started systemd-resolved.service. Nov 1 00:41:39.650673 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:41:39.650784 systemd[1]: Finished modprobe@loop.service. Nov 1 00:41:39.652973 systemd[1]: Reached target network.target. Nov 1 00:41:39.670145 systemd[1]: Reached target nss-lookup.target. Nov 1 00:41:39.671787 systemd[1]: Reached target time-set.target. Nov 1 00:41:39.673375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:41:39.673411 systemd[1]: Reached target sysinit.target. Nov 1 00:41:39.675319 systemd[1]: Started motdgen.path. Nov 1 00:41:39.676686 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:41:39.679895 systemd[1]: Started logrotate.timer. Nov 1 00:41:39.681886 systemd[1]: Started mdadm.timer. Nov 1 00:41:39.683219 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:41:39.684866 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:41:39.684896 systemd[1]: Reached target paths.target. Nov 1 00:41:39.686504 systemd[1]: Reached target timers.target. Nov 1 00:41:39.688263 systemd[1]: Listening on dbus.socket. Nov 1 00:41:39.690649 systemd[1]: Starting docker.socket... Nov 1 00:41:39.694018 systemd[1]: Listening on sshd.socket. Nov 1 00:41:39.695606 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.695693 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.696262 systemd[1]: Finished ensure-sysext.service. Nov 1 00:41:39.697911 systemd[1]: Listening on docker.socket. Nov 1 00:41:39.700195 systemd[1]: Reached target sockets.target. Nov 1 00:41:39.701728 systemd[1]: Reached target basic.target. Nov 1 00:41:39.703262 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.703285 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:41:39.704173 systemd[1]: Starting containerd.service... Nov 1 00:41:39.706511 systemd[1]: Starting dbus.service... Nov 1 00:41:39.709010 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:41:39.712266 systemd[1]: Starting extend-filesystems.service... Nov 1 00:41:39.716105 jq[1179]: false Nov 1 00:41:39.714171 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Nov 1 00:41:39.715571 systemd[1]: Starting motdgen.service... Nov 1 00:41:39.718265 systemd[1]: Starting prepare-helm.service... Nov 1 00:41:39.721059 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:41:39.724242 systemd[1]: Starting sshd-keygen.service... Nov 1 00:41:39.729274 systemd[1]: Starting systemd-logind.service... Nov 1 00:41:39.731737 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:41:39.731880 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:41:39.732511 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:41:39.743763 jq[1198]: true Nov 1 00:41:39.733620 systemd[1]: Starting update-engine.service... Nov 1 00:41:39.737096 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:41:39.743502 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:41:39.743751 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:41:39.745159 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:41:39.745339 systemd[1]: Finished motdgen.service. Nov 1 00:41:39.748760 dbus-daemon[1178]: [system] SELinux support is enabled Nov 1 00:41:39.748980 systemd[1]: Started dbus.service. Nov 1 00:41:39.754523 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:41:39.754728 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:41:39.756455 extend-filesystems[1180]: Found loop1 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found sr0 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda1 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda2 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda3 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found usr Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda4 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda6 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda7 Nov 1 00:41:39.756455 extend-filesystems[1180]: Found vda9 Nov 1 00:41:39.756455 extend-filesystems[1180]: Checking size of /dev/vda9 Nov 1 00:41:39.895152 jq[1203]: true Nov 1 00:41:39.896790 update_engine[1196]: I1101 00:41:39.896178 1196 main.cc:92] Flatcar Update Engine starting Nov 1 00:41:39.901260 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:41:39.901302 systemd[1]: Reached target system-config.target. Nov 1 00:41:39.903280 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:41:39.903311 systemd[1]: Reached target user-config.target. Nov 1 00:41:39.904416 extend-filesystems[1180]: Resized partition /dev/vda9 Nov 1 00:41:39.907287 extend-filesystems[1210]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:41:39.911369 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:41:39.915980 update_engine[1196]: I1101 00:41:39.913524 1196 update_check_scheduler.cc:74] Next update check in 6m19s Nov 1 00:41:39.913888 systemd[1]: Started update-engine.service. 
Nov 1 00:41:39.917733 tar[1200]: linux-amd64/LICENSE Nov 1 00:41:39.917733 tar[1200]: linux-amd64/helm Nov 1 00:41:39.976669 systemd-networkd[1022]: eth0: Gained IPv6LL Nov 1 00:41:39.977555 systemd[1]: Started locksmithd.service. Nov 1 00:41:39.980685 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:41:39.990063 systemd[1]: Reached target network-online.target. Nov 1 00:41:40.065160 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:41:39.993263 systemd[1]: Starting kubelet.service... Nov 1 00:41:40.088731 systemd-logind[1194]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:41:40.088753 systemd-logind[1194]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:41:40.089036 systemd-logind[1194]: New seat seat0. Nov 1 00:41:40.090157 extend-filesystems[1210]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:41:40.090157 extend-filesystems[1210]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:41:40.090157 extend-filesystems[1210]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:41:40.107639 extend-filesystems[1180]: Resized filesystem in /dev/vda9 Nov 1 00:41:40.109328 env[1205]: time="2025-11-01T00:41:40.100897614Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:41:40.092417 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:41:40.092743 systemd[1]: Finished extend-filesystems.service. Nov 1 00:41:40.100057 systemd[1]: Started systemd-logind.service. Nov 1 00:41:40.113731 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:41:40.114534 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:41:40.121686 env[1205]: time="2025-11-01T00:41:40.121632902Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:41:40.122003 env[1205]: time="2025-11-01T00:41:40.121978803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:40.123770 env[1205]: time="2025-11-01T00:41:40.123731203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:40.123901 env[1205]: time="2025-11-01T00:41:40.123875686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:40.124299 env[1205]: time="2025-11-01T00:41:40.124251930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:40.124442 env[1205]: time="2025-11-01T00:41:40.124416626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:40.124559 env[1205]: time="2025-11-01T00:41:40.124532312Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:41:40.124655 env[1205]: time="2025-11-01T00:41:40.124631055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:41:40.124839 env[1205]: time="2025-11-01T00:41:40.124814926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:40.125279 env[1205]: time="2025-11-01T00:41:40.125255558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:41:40.125590 env[1205]: time="2025-11-01T00:41:40.125565149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:41:40.125677 env[1205]: time="2025-11-01T00:41:40.125652541Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:41:40.125813 env[1205]: time="2025-11-01T00:41:40.125792700Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:41:40.125901 env[1205]: time="2025-11-01T00:41:40.125881204Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136720099Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136806975Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136821591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136889046Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136904094Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136916423Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136977014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.136991495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137017709Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137035350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137047670Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137063016Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137236409Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 1 00:41:40.139702 env[1205]: time="2025-11-01T00:41:40.137377558Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137754882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137790936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137821987Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137921122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137936972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.137951278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138070453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138090347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138106362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138131032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138141757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138155395Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138300217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138318599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140300 env[1205]: time="2025-11-01T00:41:40.138338494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140778 env[1205]: time="2025-11-01T00:41:40.138368217Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:41:40.140778 env[1205]: time="2025-11-01T00:41:40.138385241Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:41:40.140778 env[1205]: time="2025-11-01T00:41:40.138396099Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 1 00:41:40.140778 env[1205]: time="2025-11-01T00:41:40.138424259Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:41:40.140778 env[1205]: time="2025-11-01T00:41:40.138479806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.138766456Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.138837884Z" level=info msg="Connect containerd service" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.138886926Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139625169Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139705315Z" level=info msg="Start subscribing containerd event" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139765319Z" level=info msg="Start recovering state" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139828122Z" level=info msg="Start event monitor" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139842202Z" level=info msg="Start snapshots syncer" Nov 1 
00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139853153Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.139859936Z" level=info msg="Start streaming server" Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.140375990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:41:40.140950 env[1205]: time="2025-11-01T00:41:40.140417107Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:41:40.144629 env[1205]: time="2025-11-01T00:41:40.144596039Z" level=info msg="containerd successfully booted in 0.044407s" Nov 1 00:41:40.144751 systemd[1]: Started containerd.service. Nov 1 00:41:40.162453 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:41:40.592156 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:41:40.614453 systemd[1]: Finished sshd-keygen.service. Nov 1 00:41:40.617726 systemd[1]: Starting issuegen.service... Nov 1 00:41:40.625627 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:41:40.625775 systemd[1]: Finished issuegen.service. Nov 1 00:41:40.628661 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:41:40.636107 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:41:40.639057 systemd[1]: Started getty@tty1.service. Nov 1 00:41:40.641698 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:41:40.643345 systemd[1]: Reached target getty.target. Nov 1 00:41:40.748294 tar[1200]: linux-amd64/README.md Nov 1 00:41:40.754531 systemd[1]: Finished prepare-helm.service. Nov 1 00:41:41.064093 systemd[1]: Started kubelet.service. Nov 1 00:41:41.066032 systemd[1]: Reached target multi-user.target. Nov 1 00:41:41.071191 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:41:41.078741 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:41:41.078942 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:41:41.081120 systemd[1]: Startup finished in 1.088s (kernel) + 6.378s (initrd) + 8.756s (userspace) = 16.223s. Nov 1 00:41:41.762242 kubelet[1259]: E1101 00:41:41.762162 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:41:41.764148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:41:41.764309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:41:41.764604 systemd[1]: kubelet.service: Consumed 1.431s CPU time. Nov 1 00:41:49.188805 systemd[1]: Created slice system-sshd.slice. Nov 1 00:41:49.189968 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:39558.service. Nov 1 00:41:49.222818 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:41:49.224405 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.232525 systemd[1]: Created slice user-500.slice. Nov 1 00:41:49.233805 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:41:49.235471 systemd-logind[1194]: New session 1 of user core. Nov 1 00:41:49.241807 systemd[1]: Finished user-runtime-dir@500.service. 
Nov 1 00:41:49.243003 systemd[1]: Starting user@500.service... Nov 1 00:41:49.246853 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.334254 systemd[1271]: Queued start job for default target default.target. Nov 1 00:41:49.334700 systemd[1271]: Reached target paths.target. Nov 1 00:41:49.334720 systemd[1271]: Reached target sockets.target. Nov 1 00:41:49.334732 systemd[1271]: Reached target timers.target. Nov 1 00:41:49.334742 systemd[1271]: Reached target basic.target. Nov 1 00:41:49.334785 systemd[1271]: Reached target default.target. Nov 1 00:41:49.334808 systemd[1271]: Startup finished in 79ms. Nov 1 00:41:49.334927 systemd[1]: Started user@500.service. Nov 1 00:41:49.336941 systemd[1]: Started session-1.scope. Nov 1 00:41:49.390405 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:39560.service. Nov 1 00:41:49.421538 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:41:49.423132 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.427233 systemd-logind[1194]: New session 2 of user core. Nov 1 00:41:49.428260 systemd[1]: Started session-2.scope. Nov 1 00:41:49.483298 sshd[1280]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:49.487188 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:39560.service: Deactivated successfully. Nov 1 00:41:49.488007 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:41:49.488689 systemd-logind[1194]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:41:49.490183 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:39572.service. Nov 1 00:41:49.491056 systemd-logind[1194]: Removed session 2. Nov 1 00:41:49.519099 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:41:49.520310 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.523892 systemd-logind[1194]: New session 3 of user core. Nov 1 00:41:49.524698 systemd[1]: Started session-3.scope. Nov 1 00:41:49.576546 sshd[1286]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:49.579041 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:39572.service: Deactivated successfully. Nov 1 00:41:49.579580 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:41:49.580081 systemd-logind[1194]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:41:49.580942 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:39584.service. Nov 1 00:41:49.581883 systemd-logind[1194]: Removed session 3. Nov 1 00:41:49.609732 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 39584 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:41:49.610769 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.614391 systemd-logind[1194]: New session 4 of user core. Nov 1 00:41:49.615240 systemd[1]: Started session-4.scope. Nov 1 00:41:49.670331 sshd[1292]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:49.673057 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:39584.service: Deactivated successfully. Nov 1 00:41:49.673609 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:41:49.674098 systemd-logind[1194]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:41:49.675301 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:39596.service. 
Nov 1 00:41:49.676051 systemd-logind[1194]: Removed session 4. Nov 1 00:41:49.703446 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 39596 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:41:49.704631 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:49.708621 systemd-logind[1194]: New session 5 of user core. Nov 1 00:41:49.709421 systemd[1]: Started session-5.scope. Nov 1 00:41:49.767022 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:41:49.767244 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:41:49.794636 systemd[1]: Starting docker.service... Nov 1 00:41:49.836709 env[1313]: time="2025-11-01T00:41:49.836631057Z" level=info msg="Starting up" Nov 1 00:41:49.838602 env[1313]: time="2025-11-01T00:41:49.838562401Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:41:49.838602 env[1313]: time="2025-11-01T00:41:49.838584522Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:41:49.838713 env[1313]: time="2025-11-01T00:41:49.838609189Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:41:49.838713 env[1313]: time="2025-11-01T00:41:49.838630704Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:41:49.841264 env[1313]: time="2025-11-01T00:41:49.841211460Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:41:49.841264 env[1313]: time="2025-11-01T00:41:49.841259299Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:41:49.841391 env[1313]: time="2025-11-01T00:41:49.841280946Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:41:49.841391 env[1313]: time="2025-11-01T00:41:49.841301188Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:41:49.848393 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1593726338-merged.mount: Deactivated successfully. Nov 1 00:41:50.185807 env[1313]: time="2025-11-01T00:41:50.185721671Z" level=info msg="Loading containers: start." Nov 1 00:41:50.692383 kernel: Initializing XFRM netlink socket Nov 1 00:41:50.724331 env[1313]: time="2025-11-01T00:41:50.724269643Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:41:50.801803 systemd-networkd[1022]: docker0: Link UP Nov 1 00:41:50.817796 env[1313]: time="2025-11-01T00:41:50.817741372Z" level=info msg="Loading containers: done." Nov 1 00:41:50.831158 env[1313]: time="2025-11-01T00:41:50.831085637Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:41:50.831415 env[1313]: time="2025-11-01T00:41:50.831310741Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:41:50.831474 env[1313]: time="2025-11-01T00:41:50.831444111Z" level=info msg="Daemon has completed initialization" Nov 1 00:41:50.850974 systemd[1]: Started docker.service. 
Nov 1 00:41:50.857138 env[1313]: time="2025-11-01T00:41:50.857036564Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:41:51.808264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:41:51.808483 systemd[1]: Stopped kubelet.service. Nov 1 00:41:51.808529 systemd[1]: kubelet.service: Consumed 1.431s CPU time. Nov 1 00:41:51.810340 systemd[1]: Starting kubelet.service... Nov 1 00:41:51.919959 systemd[1]: Started kubelet.service. Nov 1 00:41:52.049369 kubelet[1446]: E1101 00:41:52.049291 1446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:41:52.052963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:41:52.053123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:41:52.166955 env[1205]: time="2025-11-01T00:41:52.166793489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 00:41:53.215449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340628260.mount: Deactivated successfully. Nov 1 00:41:56.195524 env[1205]: time="2025-11-01T00:41:56.195443725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:56.200244 env[1205]: time="2025-11-01T00:41:56.200155911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:56.202562 env[1205]: time="2025-11-01T00:41:56.202497913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:56.205359 env[1205]: time="2025-11-01T00:41:56.205309188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:56.206761 env[1205]: time="2025-11-01T00:41:56.206718490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 1 00:41:56.207417 env[1205]: time="2025-11-01T00:41:56.207383418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 00:41:58.825371 env[1205]: time="2025-11-01T00:41:58.825231308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:58.828258 env[1205]: time="2025-11-01T00:41:58.828216278Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:58.830961 env[1205]: time="2025-11-01T00:41:58.830906383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:41:58.832980 env[1205]: time="2025-11-01T00:41:58.832899035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:41:58.833705 env[1205]: time="2025-11-01T00:41:58.833664039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 1 00:41:58.834331 env[1205]: time="2025-11-01T00:41:58.834306261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:42:00.615036 env[1205]: time="2025-11-01T00:42:00.614960755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:00.616987 env[1205]: time="2025-11-01T00:42:00.616938015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:00.619237 env[1205]: time="2025-11-01T00:42:00.619180109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:00.621416 env[1205]: time="2025-11-01T00:42:00.621360150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:00.622131 env[1205]: time="2025-11-01T00:42:00.622087573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 1 00:42:00.622761 env[1205]: time="2025-11-01T00:42:00.622733450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:42:02.058495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:42:02.058749 systemd[1]: Stopped kubelet.service. Nov 1 00:42:02.060627 systemd[1]: Starting kubelet.service... Nov 1 00:42:02.158054 systemd[1]: Started kubelet.service. Nov 1 00:42:02.384238 kubelet[1459]: E1101 00:42:02.384069 1459 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:02.386264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:02.386427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:02.962069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363869243.mount: Deactivated successfully. 
Nov 1 00:42:05.217904 env[1205]: time="2025-11-01T00:42:05.217813057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:05.221276 env[1205]: time="2025-11-01T00:42:05.221122311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:05.226565 env[1205]: time="2025-11-01T00:42:05.224220575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:05.234341 env[1205]: time="2025-11-01T00:42:05.230227925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:05.234341 env[1205]: time="2025-11-01T00:42:05.230566116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 1 00:42:05.234341 env[1205]: time="2025-11-01T00:42:05.233656828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:42:06.115695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107444428.mount: Deactivated successfully. Nov 1 00:42:10.418298 env[1205]: time="2025-11-01T00:42:10.418192705Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:10.620080 env[1205]: time="2025-11-01T00:42:10.619977826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:10.712399 env[1205]: time="2025-11-01T00:42:10.712227966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:10.899595 env[1205]: time="2025-11-01T00:42:10.899492526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:10.900703 env[1205]: time="2025-11-01T00:42:10.900650601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 1 00:42:10.901489 env[1205]: time="2025-11-01T00:42:10.901421018Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:42:12.558523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:42:12.558804 systemd[1]: Stopped kubelet.service. Nov 1 00:42:12.560859 systemd[1]: Starting kubelet.service... Nov 1 00:42:12.660503 systemd[1]: Started kubelet.service. 
Nov 1 00:42:12.694884 kubelet[1471]: E1101 00:42:12.694820 1471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:12.696834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:12.696966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:16.143209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750298746.mount: Deactivated successfully. Nov 1 00:42:16.151473 env[1205]: time="2025-11-01T00:42:16.151408885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.153912 env[1205]: time="2025-11-01T00:42:16.153854855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.156625 env[1205]: time="2025-11-01T00:42:16.156587312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.158400 env[1205]: time="2025-11-01T00:42:16.158362960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.159050 env[1205]: time="2025-11-01T00:42:16.158938349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:42:16.159823 env[1205]: time="2025-11-01T00:42:16.159796305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:42:16.917304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289019108.mount: Deactivated successfully. 
Nov 1 00:42:21.868429 env[1205]: time="2025-11-01T00:42:21.868312282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:21.881113 env[1205]: time="2025-11-01T00:42:21.881029995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:21.891879 env[1205]: time="2025-11-01T00:42:21.891737271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:21.897498 env[1205]: time="2025-11-01T00:42:21.897435596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:21.898867 env[1205]: time="2025-11-01T00:42:21.898787575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 1 00:42:22.808413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:42:22.808609 systemd[1]: Stopped kubelet.service. Nov 1 00:42:22.810361 systemd[1]: Starting kubelet.service... Nov 1 00:42:22.908148 systemd[1]: Started kubelet.service. Nov 1 00:42:23.538117 kubelet[1505]: E1101 00:42:23.538030 1505 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:23.540539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:23.540700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:24.859799 update_engine[1196]: I1101 00:42:24.859686 1196 update_attempter.cc:509] Updating boot flags... Nov 1 00:42:25.794907 systemd[1]: Stopped kubelet.service. Nov 1 00:42:25.797070 systemd[1]: Starting kubelet.service... Nov 1 00:42:25.822022 systemd[1]: Reloading. Nov 1 00:42:25.901709 /usr/lib/systemd/system-generators/torcx-generator[1553]: time="2025-11-01T00:42:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:25.902090 /usr/lib/systemd/system-generators/torcx-generator[1553]: time="2025-11-01T00:42:25Z" level=info msg="torcx already run" Nov 1 00:42:26.541163 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:26.541187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:26.560373 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:42:26.648853 systemd[1]: Started kubelet.service. Nov 1 00:42:26.650488 systemd[1]: Stopping kubelet.service... Nov 1 00:42:26.651185 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:42:26.651399 systemd[1]: Stopped kubelet.service. Nov 1 00:42:26.653130 systemd[1]: Starting kubelet.service... Nov 1 00:42:26.754582 systemd[1]: Started kubelet.service. Nov 1 00:42:26.888067 kubelet[1603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:26.888067 kubelet[1603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:42:26.888067 kubelet[1603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:26.888532 kubelet[1603]: I1101 00:42:26.888109 1603 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:27.221421 kubelet[1603]: I1101 00:42:27.221297 1603 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:42:28.846703 kubelet[1603]: I1101 00:42:27.221693 1603 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:28.846703 kubelet[1603]: I1101 00:42:27.222296 1603 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:42:29.083372 kubelet[1603]: I1101 00:42:29.083302 1603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:29.127942 kubelet[1603]: E1101 00:42:29.127829 1603 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:42:29.161413 kubelet[1603]: E1101 00:42:29.161367 1603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:29.161413 kubelet[1603]: I1101 00:42:29.161406 1603 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:42:29.165998 kubelet[1603]: I1101 00:42:29.165974 1603 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:42:29.166195 kubelet[1603]: I1101 00:42:29.166158 1603 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:29.167600 kubelet[1603]: I1101 00:42:29.166190 1603 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:42:29.167755 kubelet[1603]: I1101 00:42:29.167602 1603 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:42:29.167755 kubelet[1603]: I1101 00:42:29.167614 1603 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:42:29.167808 kubelet[1603]: I1101 00:42:29.167761 1603 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:29.169372 kubelet[1603]: I1101 00:42:29.169336 1603 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:42:29.169372 kubelet[1603]: I1101 00:42:29.169368 1603 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:29.169460 kubelet[1603]: I1101 00:42:29.169386 1603 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:42:29.169460 kubelet[1603]: I1101 00:42:29.169434 1603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:29.200262 kubelet[1603]: E1101 00:42:29.200186 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:42:29.200588 kubelet[1603]: E1101 00:42:29.200547 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:42:29.246272 kubelet[1603]: 
I1101 00:42:29.246212 1603 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:29.246883 kubelet[1603]: I1101 00:42:29.246846 1603 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:42:29.251032 kubelet[1603]: W1101 00:42:29.250988 1603 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:42:29.253527 kubelet[1603]: I1101 00:42:29.253496 1603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:29.253606 kubelet[1603]: I1101 00:42:29.253552 1603 server.go:1289] "Started kubelet" Nov 1 00:42:29.257605 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:42:29.257779 kubelet[1603]: I1101 00:42:29.257239 1603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:29.257779 kubelet[1603]: I1101 00:42:29.257771 1603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:29.259026 kubelet[1603]: I1101 00:42:29.258735 1603 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:42:29.259312 kubelet[1603]: I1101 00:42:29.257613 1603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:29.259647 kubelet[1603]: I1101 00:42:29.259612 1603 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:29.260206 kubelet[1603]: I1101 00:42:29.260055 1603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:29.261783 kubelet[1603]: E1101 00:42:29.261748 1603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:42:29.261783 kubelet[1603]: I1101 00:42:29.261785 1603 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:29.261961 kubelet[1603]: I1101 00:42:29.261943 1603 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:29.262009 kubelet[1603]: I1101 00:42:29.261991 1603 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:29.262465 kubelet[1603]: E1101 00:42:29.262420 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:42:29.262465 kubelet[1603]: E1101 00:42:29.262428 1603 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:29.262675 kubelet[1603]: I1101 00:42:29.262643 1603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:29.263417 kubelet[1603]: I1101 00:42:29.263398 1603 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:42:29.263417 kubelet[1603]: I1101 00:42:29.263411 1603 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:42:29.276137 kubelet[1603]: I1101 00:42:29.276060 1603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:42:29.277143 kubelet[1603]: I1101 00:42:29.277103 1603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:42:29.277143 kubelet[1603]: I1101 00:42:29.277134 1603 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:42:29.277222 kubelet[1603]: I1101 00:42:29.277158 1603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:42:29.277222 kubelet[1603]: I1101 00:42:29.277169 1603 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:42:29.277286 kubelet[1603]: E1101 00:42:29.277218 1603 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:29.279266 kubelet[1603]: E1101 00:42:29.279202 1603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Nov 1 00:42:29.281905 kubelet[1603]: E1101 00:42:29.280682 1603 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bb3cd980b585 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:42:29.253518725 +0000 UTC m=+2.494960092,LastTimestamp:2025-11-01 00:42:29.253518725 +0000 UTC m=+2.494960092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:42:29.282836 kubelet[1603]: E1101 00:42:29.282795 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:42:29.288568 kubelet[1603]: I1101 00:42:29.288532 1603 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:29.288568 kubelet[1603]: I1101 00:42:29.288559 1603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:29.288668 kubelet[1603]: I1101 00:42:29.288578 1603 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:29.308980 kubelet[1603]: 
I1101 00:42:29.308899 1603 policy_none.go:49] "None policy: Start" Nov 1 00:42:29.308980 kubelet[1603]: I1101 00:42:29.308953 1603 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:29.308980 kubelet[1603]: I1101 00:42:29.308983 1603 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:42:29.329388 systemd[1]: Created slice kubepods.slice. Nov 1 00:42:29.335400 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:42:29.338665 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:42:29.347548 kubelet[1603]: E1101 00:42:29.347473 1603 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:42:29.347848 kubelet[1603]: I1101 00:42:29.347693 1603 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:29.347848 kubelet[1603]: I1101 00:42:29.347705 1603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:29.348799 kubelet[1603]: I1101 00:42:29.347975 1603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:29.349184 kubelet[1603]: E1101 00:42:29.349158 1603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:42:29.349298 kubelet[1603]: E1101 00:42:29.349277 1603 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:42:29.399731 systemd[1]: Created slice kubepods-burstable-podbb6d5563a512127e37839c892831164b.slice. Nov 1 00:42:29.413323 kubelet[1603]: E1101 00:42:29.413240 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:29.424644 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 1 00:42:29.426316 kubelet[1603]: E1101 00:42:29.426259 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:29.448948 kubelet[1603]: I1101 00:42:29.448907 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:29.449413 kubelet[1603]: E1101 00:42:29.449374 1603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Nov 1 00:42:29.460044 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 1 00:42:29.461631 kubelet[1603]: E1101 00:42:29.461580 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:29.480318 kubelet[1603]: E1101 00:42:29.480244 1603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Nov 1 00:42:29.563868 kubelet[1603]: I1101 00:42:29.563784 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:29.563868 kubelet[1603]: I1101 00:42:29.563855 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:29.563868 kubelet[1603]: I1101 00:42:29.563878 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:29.563868 kubelet[1603]: I1101 00:42:29.563897 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:29.564246 kubelet[1603]: I1101 00:42:29.563915 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:29.564246 kubelet[1603]: I1101 00:42:29.563932 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:29.564246 kubelet[1603]: I1101 00:42:29.563950 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:29.564246 kubelet[1603]: I1101 00:42:29.563998 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:29.564246 kubelet[1603]: I1101 00:42:29.564036 1603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:29.652210 kubelet[1603]: I1101 00:42:29.652078 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:29.652607 kubelet[1603]: E1101 00:42:29.652565 1603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Nov 1 00:42:29.714522 kubelet[1603]: E1101 00:42:29.714453 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:29.715430 env[1205]: time="2025-11-01T00:42:29.715376528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bb6d5563a512127e37839c892831164b,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:29.727697 kubelet[1603]: E1101 00:42:29.727648 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:29.728409 env[1205]: time="2025-11-01T00:42:29.728309887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:29.762838 kubelet[1603]: E1101 00:42:29.762780 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:29.763478 env[1205]: time="2025-11-01T00:42:29.763432098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:29.881854 kubelet[1603]: E1101 00:42:29.881795 1603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Nov 1 00:42:30.039126 kubelet[1603]: E1101 00:42:30.038895 1603 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bb3cd980b585 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:42:29.253518725 +0000 UTC m=+2.494960092,LastTimestamp:2025-11-01 00:42:29.253518725 +0000 UTC m=+2.494960092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:42:30.054419 kubelet[1603]: I1101 00:42:30.054369 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:30.055009 kubelet[1603]: E1101 00:42:30.054964 1603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Nov 1 00:42:30.506798 kubelet[1603]: E1101 00:42:30.506724 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:42:30.587440 kubelet[1603]: E1101 00:42:30.587341 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:42:30.682999 kubelet[1603]: E1101 00:42:30.682940 1603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Nov 1 00:42:30.744093 kubelet[1603]: E1101 00:42:30.744018 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:42:30.772844 kubelet[1603]: E1101 00:42:30.772678 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:42:30.857519 kubelet[1603]: I1101 00:42:30.857462 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:30.857904 kubelet[1603]: E1101 00:42:30.857875 1603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Nov 1 00:42:31.197411 kubelet[1603]: E1101 00:42:31.197340 1603 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:42:31.206927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043375327.mount: Deactivated successfully. 
Nov 1 00:42:31.496954 env[1205]: time="2025-11-01T00:42:31.496794978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.570782 env[1205]: time="2025-11-01T00:42:31.570738799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.669593 env[1205]: time="2025-11-01T00:42:31.669521301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.676343 env[1205]: time="2025-11-01T00:42:31.676292855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.702007 env[1205]: time="2025-11-01T00:42:31.701966011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.731005 env[1205]: time="2025-11-01T00:42:31.730952579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.736441 env[1205]: time="2025-11-01T00:42:31.736386227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.748930 env[1205]: time="2025-11-01T00:42:31.748773913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.752333 env[1205]: time="2025-11-01T00:42:31.752281765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.769403 env[1205]: time="2025-11-01T00:42:31.769368941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.778726 env[1205]: time="2025-11-01T00:42:31.778685290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.793960 env[1205]: time="2025-11-01T00:42:31.793904300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.913043 env[1205]: time="2025-11-01T00:42:31.912920379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.913043 env[1205]: time="2025-11-01T00:42:31.912991524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.913043 env[1205]: time="2025-11-01T00:42:31.913017248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.913360 env[1205]: time="2025-11-01T00:42:31.913289707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a100b62f8c9be08b678176f6881fff1133dd835c69d1f81760db6165d652a45c pid=1650 runtime=io.containerd.runc.v2 Nov 1 00:42:31.962608 env[1205]: time="2025-11-01T00:42:31.962512628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.962608 env[1205]: time="2025-11-01T00:42:31.962553471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.962608 env[1205]: time="2025-11-01T00:42:31.962563131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.962836 env[1205]: time="2025-11-01T00:42:31.962686054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d1de12db86d33e29c14ec4f4dc75e4791fa79c3258daaa2e1a6fe782effaeb3 pid=1668 runtime=io.containerd.runc.v2 Nov 1 00:42:31.985561 env[1205]: time="2025-11-01T00:42:31.985433742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.985779 env[1205]: time="2025-11-01T00:42:31.985522363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.985779 env[1205]: time="2025-11-01T00:42:31.985547205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.985779 env[1205]: time="2025-11-01T00:42:31.985677472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf46ddec0a48adac36a6026b35b2c3899642aa27a62c330cff04a984e817e855 pid=1691 runtime=io.containerd.runc.v2 Nov 1 00:42:31.994574 systemd[1]: Started cri-containerd-6d1de12db86d33e29c14ec4f4dc75e4791fa79c3258daaa2e1a6fe782effaeb3.scope. Nov 1 00:42:32.024845 systemd[1]: Started cri-containerd-bf46ddec0a48adac36a6026b35b2c3899642aa27a62c330cff04a984e817e855.scope. Nov 1 00:42:32.180010 systemd[1]: Started cri-containerd-a100b62f8c9be08b678176f6881fff1133dd835c69d1f81760db6165d652a45c.scope. 
Nov 1 00:42:32.285773 kubelet[1603]: E1101 00:42:32.284309 1603 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="3.2s" Nov 1 00:42:32.434277 env[1205]: time="2025-11-01T00:42:32.434208586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf46ddec0a48adac36a6026b35b2c3899642aa27a62c330cff04a984e817e855\"" Nov 1 00:42:32.436145 kubelet[1603]: E1101 00:42:32.436094 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:32.460140 kubelet[1603]: I1101 00:42:32.459693 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:32.460140 kubelet[1603]: E1101 00:42:32.460045 1603 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Nov 1 00:42:32.464901 env[1205]: time="2025-11-01T00:42:32.464854413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bb6d5563a512127e37839c892831164b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d1de12db86d33e29c14ec4f4dc75e4791fa79c3258daaa2e1a6fe782effaeb3\"" Nov 1 00:42:32.465949 kubelet[1603]: E1101 00:42:32.465703 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:32.476161 env[1205]: time="2025-11-01T00:42:32.476087560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a100b62f8c9be08b678176f6881fff1133dd835c69d1f81760db6165d652a45c\"" Nov 1 00:42:32.477365 kubelet[1603]: E1101 00:42:32.477319 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:32.547999 kubelet[1603]: E1101 00:42:32.547850 1603 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:42:32.610619 env[1205]: time="2025-11-01T00:42:32.610552160Z" level=info msg="CreateContainer within sandbox \"bf46ddec0a48adac36a6026b35b2c3899642aa27a62c330cff04a984e817e855\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:42:32.614000 env[1205]: time="2025-11-01T00:42:32.613954520Z" level=info msg="CreateContainer within sandbox \"6d1de12db86d33e29c14ec4f4dc75e4791fa79c3258daaa2e1a6fe782effaeb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:42:32.617857 env[1205]: time="2025-11-01T00:42:32.617747950Z" level=info msg="CreateContainer within sandbox \"a100b62f8c9be08b678176f6881fff1133dd835c69d1f81760db6165d652a45c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 
00:42:32.646811 env[1205]: time="2025-11-01T00:42:32.646732739Z" level=info msg="CreateContainer within sandbox \"bf46ddec0a48adac36a6026b35b2c3899642aa27a62c330cff04a984e817e855\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f74af703203b0509c1ad978e5a8ac4e6277673588e0f0d4d6795b8d7d9e3491\"" Nov 1 00:42:32.647702 env[1205]: time="2025-11-01T00:42:32.647670738Z" level=info msg="StartContainer for \"2f74af703203b0509c1ad978e5a8ac4e6277673588e0f0d4d6795b8d7d9e3491\"" Nov 1 00:42:32.659445 env[1205]: time="2025-11-01T00:42:32.659383627Z" level=info msg="CreateContainer within sandbox \"6d1de12db86d33e29c14ec4f4dc75e4791fa79c3258daaa2e1a6fe782effaeb3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"380c36c1906ed44124eb5615218744d4d5bfbd1028e8a10c4e17f641861e8e18\"" Nov 1 00:42:32.660208 env[1205]: time="2025-11-01T00:42:32.660187962Z" level=info msg="StartContainer for \"380c36c1906ed44124eb5615218744d4d5bfbd1028e8a10c4e17f641861e8e18\"" Nov 1 00:42:32.660731 env[1205]: time="2025-11-01T00:42:32.660691002Z" level=info msg="CreateContainer within sandbox \"a100b62f8c9be08b678176f6881fff1133dd835c69d1f81760db6165d652a45c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8cd6b354488a2d155bad7ddf96cda8a652f91820a87c8576d4ae8949cbefc434\"" Nov 1 00:42:32.661385 env[1205]: time="2025-11-01T00:42:32.661321531Z" level=info msg="StartContainer for \"8cd6b354488a2d155bad7ddf96cda8a652f91820a87c8576d4ae8949cbefc434\"" Nov 1 00:42:32.667187 systemd[1]: Started cri-containerd-2f74af703203b0509c1ad978e5a8ac4e6277673588e0f0d4d6795b8d7d9e3491.scope. Nov 1 00:42:32.690430 systemd[1]: Started cri-containerd-380c36c1906ed44124eb5615218744d4d5bfbd1028e8a10c4e17f641861e8e18.scope. Nov 1 00:42:32.691340 systemd[1]: Started cri-containerd-8cd6b354488a2d155bad7ddf96cda8a652f91820a87c8576d4ae8949cbefc434.scope. 
Nov 1 00:42:32.736261 env[1205]: time="2025-11-01T00:42:32.736196310Z" level=info msg="StartContainer for \"2f74af703203b0509c1ad978e5a8ac4e6277673588e0f0d4d6795b8d7d9e3491\" returns successfully" Nov 1 00:42:32.761066 env[1205]: time="2025-11-01T00:42:32.760995025Z" level=info msg="StartContainer for \"8cd6b354488a2d155bad7ddf96cda8a652f91820a87c8576d4ae8949cbefc434\" returns successfully" Nov 1 00:42:32.761722 env[1205]: time="2025-11-01T00:42:32.761689527Z" level=info msg="StartContainer for \"380c36c1906ed44124eb5615218744d4d5bfbd1028e8a10c4e17f641861e8e18\" returns successfully" Nov 1 00:42:33.292124 kubelet[1603]: E1101 00:42:33.292080 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:33.292563 kubelet[1603]: E1101 00:42:33.292232 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:33.294217 kubelet[1603]: E1101 00:42:33.294191 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:33.294325 kubelet[1603]: E1101 00:42:33.294298 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:33.296221 kubelet[1603]: E1101 00:42:33.295918 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:33.296221 kubelet[1603]: E1101 00:42:33.296024 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:34.298977 kubelet[1603]: E1101 00:42:34.298928 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:34.299465 kubelet[1603]: E1101 00:42:34.299082 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:34.299465 kubelet[1603]: E1101 00:42:34.299303 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:34.299465 kubelet[1603]: E1101 00:42:34.299445 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:35.299980 kubelet[1603]: E1101 00:42:35.299773 1603 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:42:35.299980 kubelet[1603]: E1101 00:42:35.299882 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:35.661499 kubelet[1603]: I1101 00:42:35.661450 1603 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:36.050250 kubelet[1603]: E1101 00:42:36.050117 1603 nodelease.go:49] "Failed to get node when trying to set owner 
ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:42:36.265709 kubelet[1603]: I1101 00:42:36.265629 1603 apiserver.go:52] "Watching apiserver" Nov 1 00:42:36.462534 kubelet[1603]: I1101 00:42:36.462468 1603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:42:36.520728 kubelet[1603]: I1101 00:42:36.520669 1603 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:42:36.563631 kubelet[1603]: I1101 00:42:36.563577 1603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:36.569954 kubelet[1603]: E1101 00:42:36.569888 1603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:36.569954 kubelet[1603]: I1101 00:42:36.569945 1603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:36.571845 kubelet[1603]: E1101 00:42:36.571808 1603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:36.571845 kubelet[1603]: I1101 00:42:36.571831 1603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:36.573850 kubelet[1603]: E1101 00:42:36.573814 1603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:36.928061 kubelet[1603]: I1101 00:42:36.928012 1603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:36.930304 kubelet[1603]: E1101 00:42:36.930275 1603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:36.930508 kubelet[1603]: E1101 00:42:36.930492 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:37.510747 kubelet[1603]: I1101 00:42:37.510699 1603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:37.515923 kubelet[1603]: E1101 00:42:37.515877 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:38.095175 systemd[1]: Reloading. 
Nov 1 00:42:38.161486 /usr/lib/systemd/system-generators/torcx-generator[1915]: time="2025-11-01T00:42:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:38.161530 /usr/lib/systemd/system-generators/torcx-generator[1915]: time="2025-11-01T00:42:38Z" level=info msg="torcx already run" Nov 1 00:42:38.230154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:38.230174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:38.248009 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:38.304532 kubelet[1603]: E1101 00:42:38.304476 1603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:38.351772 systemd[1]: Stopping kubelet.service... Nov 1 00:42:38.375910 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:42:38.376169 systemd[1]: Stopped kubelet.service. Nov 1 00:42:38.378559 systemd[1]: Starting kubelet.service... Nov 1 00:42:38.475807 systemd[1]: Started kubelet.service. Nov 1 00:42:38.514268 kubelet[1960]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:38.514268 kubelet[1960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:42:38.514268 kubelet[1960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:42:38.514734 kubelet[1960]: I1101 00:42:38.514315 1960 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:38.523206 kubelet[1960]: I1101 00:42:38.523167 1960 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:42:38.523423 kubelet[1960]: I1101 00:42:38.523390 1960 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:38.523690 kubelet[1960]: I1101 00:42:38.523673 1960 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:42:38.524751 kubelet[1960]: I1101 00:42:38.524732 1960 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:42:38.526647 kubelet[1960]: I1101 00:42:38.526628 1960 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:38.529565 kubelet[1960]: E1101 00:42:38.529513 1960 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:38.529615 kubelet[1960]: I1101 00:42:38.529566 1960 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:42:38.534018 kubelet[1960]: I1101 00:42:38.533993 1960 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:42:38.534206 kubelet[1960]: I1101 00:42:38.534173 1960 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:38.534372 kubelet[1960]: I1101 00:42:38.534198 1960 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:42:38.534474 kubelet[1960]: I1101 00:42:38.534381 1960 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 00:42:38.534474 kubelet[1960]: I1101 00:42:38.534391 1960 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:42:38.534474 kubelet[1960]: I1101 00:42:38.534435 1960 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:38.534593 kubelet[1960]: I1101 00:42:38.534579 1960 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:42:38.534593 kubelet[1960]: I1101 00:42:38.534593 1960 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:38.534641 kubelet[1960]: I1101 00:42:38.534627 1960 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:42:38.534641 kubelet[1960]: I1101 00:42:38.534642 1960 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:38.536949 kubelet[1960]: I1101 00:42:38.536899 1960 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:38.537637 kubelet[1960]: I1101 00:42:38.537617 1960 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:42:38.543759 kubelet[1960]: I1101 00:42:38.543738 1960 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:38.543850 kubelet[1960]: I1101 00:42:38.543780 1960 server.go:1289] "Started kubelet" Nov 1 00:42:38.545204 kubelet[1960]: I1101 00:42:38.544750 1960 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:38.545293 kubelet[1960]: I1101 00:42:38.545214 1960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:38.545584 kubelet[1960]: I1101 00:42:38.545559 1960 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:38.545775 kubelet[1960]: I1101 00:42:38.545692 1960 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:38.546688 kubelet[1960]: I1101 00:42:38.546663 1960 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:42:38.549480 kubelet[1960]: I1101 00:42:38.549435 1960 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:38.550958 kubelet[1960]: I1101 00:42:38.550920 1960 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:38.551046 kubelet[1960]: I1101 00:42:38.550996 1960 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:42:38.551123 kubelet[1960]: E1101 00:42:38.551045 1960 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:42:38.551123 kubelet[1960]: I1101 00:42:38.551101 1960 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:38.551475 kubelet[1960]: I1101 00:42:38.551451 1960 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:38.551604 kubelet[1960]: I1101 00:42:38.551582 1960 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:38.555664 kubelet[1960]: E1101 00:42:38.555586 1960 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:38.556171 kubelet[1960]: I1101 00:42:38.556119 1960 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:42:38.567560 kubelet[1960]: I1101 00:42:38.567504 1960 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:42:38.570995 kubelet[1960]: I1101 00:42:38.570956 1960 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:42:38.571133 kubelet[1960]: I1101 00:42:38.571095 1960 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:42:38.571133 kubelet[1960]: I1101 00:42:38.571125 1960 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:42:38.571133 kubelet[1960]: I1101 00:42:38.571132 1960 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:42:38.571365 kubelet[1960]: E1101 00:42:38.571228 1960 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:38.588478 kubelet[1960]: I1101 00:42:38.588437 1960 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:38.588478 kubelet[1960]: I1101 00:42:38.588453 1960 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:38.588478 kubelet[1960]: I1101 00:42:38.588470 1960 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:38.588693 kubelet[1960]: I1101 00:42:38.588594 1960 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:42:38.588693 kubelet[1960]: I1101 00:42:38.588603 1960 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:42:38.588693 kubelet[1960]: I1101 00:42:38.588618 1960 policy_none.go:49] "None policy: Start" Nov 1 00:42:38.588693 kubelet[1960]: I1101 00:42:38.588626 1960 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:38.588693 kubelet[1960]: I1101 00:42:38.588634 1960 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:42:38.588802 kubelet[1960]: I1101 00:42:38.588711 1960 state_mem.go:75] "Updated machine memory state" Nov 1 00:42:38.592076 kubelet[1960]: E1101 00:42:38.592049 1960 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:42:38.592237 kubelet[1960]: I1101 00:42:38.592206 1960 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:38.592296 kubelet[1960]: I1101 00:42:38.592251 1960 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:38.592517 kubelet[1960]: I1101 00:42:38.592490 1960 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:38.593785 kubelet[1960]: E1101 00:42:38.593612 1960 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:42:38.672942 kubelet[1960]: I1101 00:42:38.672797 1960 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.672942 kubelet[1960]: I1101 00:42:38.672836 1960 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:38.673171 kubelet[1960]: I1101 00:42:38.672797 1960 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:38.683995 kubelet[1960]: E1101 00:42:38.683945 1960 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:38.697608 kubelet[1960]: I1101 00:42:38.697572 1960 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:42:38.703580 kubelet[1960]: I1101 00:42:38.703523 1960 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:42:38.703769 kubelet[1960]: I1101 00:42:38.703626 1960 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:42:38.852390 kubelet[1960]: I1101 00:42:38.852298 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:42:38.852390 kubelet[1960]: I1101 00:42:38.852342 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:38.852390 kubelet[1960]: I1101 00:42:38.852389 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:38.852390 kubelet[1960]: I1101 00:42:38.852406 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb6d5563a512127e37839c892831164b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bb6d5563a512127e37839c892831164b\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:38.852677 kubelet[1960]: I1101 00:42:38.852422 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.852677 kubelet[1960]: I1101 00:42:38.852484 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.852677 
kubelet[1960]: I1101 00:42:38.852499 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.852677 kubelet[1960]: I1101 00:42:38.852514 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.852677 kubelet[1960]: I1101 00:42:38.852541 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:42:38.983924 kubelet[1960]: E1101 00:42:38.983774 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:38.983924 kubelet[1960]: E1101 00:42:38.983886 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:38.984790 kubelet[1960]: E1101 00:42:38.984760 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:39.090127 sudo[2001]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:42:39.090325 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:42:39.535874 kubelet[1960]: I1101 00:42:39.535817 1960 apiserver.go:52] "Watching apiserver" Nov 1 00:42:39.552438 kubelet[1960]: I1101 00:42:39.552383 1960 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:42:39.580422 kubelet[1960]: I1101 00:42:39.580389 1960 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:39.580786 kubelet[1960]: E1101 00:42:39.580516 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:39.580839 kubelet[1960]: E1101 00:42:39.580587 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:39.583338 sudo[2001]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:39.587850 kubelet[1960]: E1101 00:42:39.587814 1960 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:42:39.593203 kubelet[1960]: E1101 00:42:39.593148 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:39.653396 kubelet[1960]: I1101 00:42:39.653208 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6531495189999998 podStartE2EDuration="1.653149519s" podCreationTimestamp="2025-11-01 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:39.652969779 +0000 UTC m=+1.173654510" watchObservedRunningTime="2025-11-01 00:42:39.653149519 +0000 UTC m=+1.173834250" Nov 1 00:42:39.675333 kubelet[1960]: I1101 00:42:39.675256 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.67523657 podStartE2EDuration="1.67523657s" podCreationTimestamp="2025-11-01 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:39.665939017 +0000 UTC m=+1.186623748" watchObservedRunningTime="2025-11-01 00:42:39.67523657 +0000 UTC m=+1.195921301" Nov 1 00:42:40.581983 kubelet[1960]: E1101 00:42:40.581916 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:40.582681 kubelet[1960]: E1101 00:42:40.582655 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:41.422044 sudo[1301]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:41.424509 sshd[1298]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:41.427616 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:39596.service: Deactivated successfully. Nov 1 00:42:41.428441 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:42:41.428609 systemd[1]: session-5.scope: Consumed 6.778s CPU time. Nov 1 00:42:41.429103 systemd-logind[1194]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:42:41.429897 systemd-logind[1194]: Removed session 5. Nov 1 00:42:41.583518 kubelet[1960]: E1101 00:42:41.583466 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:44.257785 kubelet[1960]: I1101 00:42:44.257516 1960 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:42:44.258332 kubelet[1960]: I1101 00:42:44.258090 1960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:42:44.258415 env[1205]: time="2025-11-01T00:42:44.257909424Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:42:45.325166 kubelet[1960]: E1101 00:42:45.325119 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.327449 kubelet[1960]: I1101 00:42:45.327382 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.32737032 podStartE2EDuration="8.32737032s" podCreationTimestamp="2025-11-01 00:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:39.676109919 +0000 UTC m=+1.196794660" watchObservedRunningTime="2025-11-01 00:42:45.32737032 +0000 UTC m=+6.848055051" Nov 1 00:42:45.402791 systemd[1]: Created slice kubepods-besteffort-pod77941bc4_aee3_4fd7_bbab_1093ed5f4443.slice. Nov 1 00:42:45.423940 systemd[1]: Created slice kubepods-burstable-poda02643a7_f7f3_447b_8eca_ff1c75038e9e.slice. Nov 1 00:42:45.462371 systemd[1]: Created slice kubepods-besteffort-pod0e91980c_3f1d_4d0b_b726_3dcf4fe7ba71.slice. Nov 1 00:42:45.513370 kubelet[1960]: I1101 00:42:45.513284 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xcf5c\" (UID: \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\") " pod="kube-system/cilium-operator-6c4d7847fc-xcf5c" Nov 1 00:42:45.513370 kubelet[1960]: I1101 00:42:45.513364 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-cgroup\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513406 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dwm4\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-kube-api-access-8dwm4\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513455 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77941bc4-aee3-4fd7-bbab-1093ed5f4443-kube-proxy\") pod \"kube-proxy-796kv\" (UID: \"77941bc4-aee3-4fd7-bbab-1093ed5f4443\") " pod="kube-system/kube-proxy-796kv" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513476 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-bpf-maps\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513495 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-xtables-lock\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513526 1960 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a02643a7-f7f3-447b-8eca-ff1c75038e9e-clustermesh-secrets\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513630 kubelet[1960]: I1101 00:42:45.513547 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-lib-modules\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513832 kubelet[1960]: I1101 00:42:45.513568 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77941bc4-aee3-4fd7-bbab-1093ed5f4443-xtables-lock\") pod \"kube-proxy-796kv\" (UID: \"77941bc4-aee3-4fd7-bbab-1093ed5f4443\") " pod="kube-system/kube-proxy-796kv" Nov 1 00:42:45.513832 kubelet[1960]: I1101 00:42:45.513594 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwb95\" (UniqueName: \"kubernetes.io/projected/77941bc4-aee3-4fd7-bbab-1093ed5f4443-kube-api-access-qwb95\") pod \"kube-proxy-796kv\" (UID: \"77941bc4-aee3-4fd7-bbab-1093ed5f4443\") " pod="kube-system/kube-proxy-796kv" Nov 1 00:42:45.513832 kubelet[1960]: I1101 00:42:45.513615 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfph\" (UniqueName: \"kubernetes.io/projected/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-kube-api-access-wbfph\") pod \"cilium-operator-6c4d7847fc-xcf5c\" (UID: \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\") " pod="kube-system/cilium-operator-6c4d7847fc-xcf5c" Nov 1 00:42:45.513832 kubelet[1960]: I1101 00:42:45.513652 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77941bc4-aee3-4fd7-bbab-1093ed5f4443-lib-modules\") pod \"kube-proxy-796kv\" (UID: \"77941bc4-aee3-4fd7-bbab-1093ed5f4443\") " pod="kube-system/kube-proxy-796kv" Nov 1 00:42:45.513832 kubelet[1960]: I1101 00:42:45.513673 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-run\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513691 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hostproc\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513714 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-config-path\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513746 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-net\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513764 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hubble-tls\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513785 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-etc-cni-netd\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.513994 kubelet[1960]: I1101 00:42:45.513806 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-kernel\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.514225 kubelet[1960]: I1101 00:42:45.513824 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cni-path\") pod \"cilium-v8mz7\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " pod="kube-system/cilium-v8mz7" Nov 1 00:42:45.591967 kubelet[1960]: E1101 00:42:45.591818 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.615457 kubelet[1960]: I1101 00:42:45.615404 1960 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:42:45.719071 kubelet[1960]: E1101 00:42:45.719003 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.719923 env[1205]: time="2025-11-01T00:42:45.719846729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-796kv,Uid:77941bc4-aee3-4fd7-bbab-1093ed5f4443,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:45.730226 kubelet[1960]: E1101 00:42:45.730190 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.730731 env[1205]: time="2025-11-01T00:42:45.730685399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8mz7,Uid:a02643a7-f7f3-447b-8eca-ff1c75038e9e,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:45.748377 env[1205]: time="2025-11-01T00:42:45.748267075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.748377 env[1205]: time="2025-11-01T00:42:45.748307255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.748377 env[1205]: time="2025-11-01T00:42:45.748317836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.748778 env[1205]: time="2025-11-01T00:42:45.748692617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c583f0e82ea4d9c0696cf5c8da351af0dbc3ea6ae94e49c998ada005cc4b7eb9 pid=2061 runtime=io.containerd.runc.v2 Nov 1 00:42:45.752197 env[1205]: time="2025-11-01T00:42:45.752112375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.752394 env[1205]: time="2025-11-01T00:42:45.752169789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.752394 env[1205]: time="2025-11-01T00:42:45.752181492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.752491 env[1205]: time="2025-11-01T00:42:45.752393762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6 pid=2076 runtime=io.containerd.runc.v2 Nov 1 00:42:45.762309 systemd[1]: Started cri-containerd-c583f0e82ea4d9c0696cf5c8da351af0dbc3ea6ae94e49c998ada005cc4b7eb9.scope. Nov 1 00:42:45.768117 kubelet[1960]: E1101 00:42:45.768079 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.770407 env[1205]: time="2025-11-01T00:42:45.770357283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xcf5c,Uid:0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:45.773437 systemd[1]: Started cri-containerd-b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6.scope. 
Nov 1 00:42:45.792001 env[1205]: time="2025-11-01T00:42:45.791917783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-796kv,Uid:77941bc4-aee3-4fd7-bbab-1093ed5f4443,Namespace:kube-system,Attempt:0,} returns sandbox id \"c583f0e82ea4d9c0696cf5c8da351af0dbc3ea6ae94e49c998ada005cc4b7eb9\"" Nov 1 00:42:45.793671 kubelet[1960]: E1101 00:42:45.793649 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.801222 env[1205]: time="2025-11-01T00:42:45.800512885Z" level=info msg="CreateContainer within sandbox \"c583f0e82ea4d9c0696cf5c8da351af0dbc3ea6ae94e49c998ada005cc4b7eb9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:42:45.807978 env[1205]: time="2025-11-01T00:42:45.807921978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8mz7,Uid:a02643a7-f7f3-447b-8eca-ff1c75038e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\"" Nov 1 00:42:45.809708 kubelet[1960]: E1101 00:42:45.808953 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.809903 env[1205]: time="2025-11-01T00:42:45.809853530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:42:45.813608 env[1205]: time="2025-11-01T00:42:45.813526950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.813608 env[1205]: time="2025-11-01T00:42:45.813580126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.814416 env[1205]: time="2025-11-01T00:42:45.813593262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.815121 env[1205]: time="2025-11-01T00:42:45.814988022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec pid=2139 runtime=io.containerd.runc.v2 Nov 1 00:42:45.829718 env[1205]: time="2025-11-01T00:42:45.829644809Z" level=info msg="CreateContainer within sandbox \"c583f0e82ea4d9c0696cf5c8da351af0dbc3ea6ae94e49c998ada005cc4b7eb9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a3d914897f25cf45242f3ef9ea10401ec6812f5b13821cae3bb2b35addc69c9\"" Nov 1 00:42:45.831955 env[1205]: time="2025-11-01T00:42:45.831915212Z" level=info msg="StartContainer for \"4a3d914897f25cf45242f3ef9ea10401ec6812f5b13821cae3bb2b35addc69c9\"" Nov 1 00:42:45.835794 systemd[1]: Started cri-containerd-c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec.scope. Nov 1 00:42:45.857015 systemd[1]: Started cri-containerd-4a3d914897f25cf45242f3ef9ea10401ec6812f5b13821cae3bb2b35addc69c9.scope. 
Nov 1 00:42:45.879494 env[1205]: time="2025-11-01T00:42:45.879401893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xcf5c,Uid:0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\"" Nov 1 00:42:45.881466 kubelet[1960]: E1101 00:42:45.881428 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:45.927491 env[1205]: time="2025-11-01T00:42:45.927404826Z" level=info msg="StartContainer for \"4a3d914897f25cf45242f3ef9ea10401ec6812f5b13821cae3bb2b35addc69c9\" returns successfully" Nov 1 00:42:46.596409 kubelet[1960]: E1101 00:42:46.596372 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:48.013836 kubelet[1960]: E1101 00:42:48.013793 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:48.029247 kubelet[1960]: I1101 00:42:48.029001 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-796kv" podStartSLOduration=3.028977227 podStartE2EDuration="3.028977227s" podCreationTimestamp="2025-11-01 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:46.607495594 +0000 UTC m=+8.128180325" watchObservedRunningTime="2025-11-01 00:42:48.028977227 +0000 UTC m=+9.549661968" Nov 1 00:42:48.598863 kubelet[1960]: E1101 00:42:48.598827 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:50.697964 kubelet[1960]: E1101 00:42:50.697901 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:51.612971 kubelet[1960]: E1101 00:42:51.612922 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:54.332330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624481689.mount: Deactivated successfully. 
Nov 1 00:42:58.492260 env[1205]: time="2025-11-01T00:42:58.492159948Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:58.494537 env[1205]: time="2025-11-01T00:42:58.494481385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:58.497927 env[1205]: time="2025-11-01T00:42:58.497833693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:58.498542 env[1205]: time="2025-11-01T00:42:58.498500323Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:42:58.500063 env[1205]: time="2025-11-01T00:42:58.500009497Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:42:58.504688 env[1205]: time="2025-11-01T00:42:58.504630169Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:42:58.520909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901467684.mount: Deactivated successfully. Nov 1 00:42:58.521763 env[1205]: time="2025-11-01T00:42:58.521692798Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\"" Nov 1 00:42:58.522734 env[1205]: time="2025-11-01T00:42:58.522679994Z" level=info msg="StartContainer for \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\"" Nov 1 00:42:58.546603 systemd[1]: Started cri-containerd-1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b.scope. Nov 1 00:42:58.552736 systemd[1]: run-containerd-runc-k8s.io-1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b-runc.qbOWE8.mount: Deactivated successfully. Nov 1 00:42:58.594666 env[1205]: time="2025-11-01T00:42:58.594613622Z" level=info msg="StartContainer for \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\" returns successfully" Nov 1 00:42:58.606785 systemd[1]: cri-containerd-1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b.scope: Deactivated successfully. 
Nov 1 00:42:58.632122 kubelet[1960]: E1101 00:42:58.632071 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:59.256668 env[1205]: time="2025-11-01T00:42:59.256597064Z" level=info msg="shim disconnected" id=1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b Nov 1 00:42:59.256668 env[1205]: time="2025-11-01T00:42:59.256657742Z" level=warning msg="cleaning up after shim disconnected" id=1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b namespace=k8s.io Nov 1 00:42:59.256668 env[1205]: time="2025-11-01T00:42:59.256669946Z" level=info msg="cleaning up dead shim" Nov 1 00:42:59.266894 env[1205]: time="2025-11-01T00:42:59.266820620Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2395 runtime=io.containerd.runc.v2\n" Nov 1 00:42:59.518390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b-rootfs.mount: Deactivated successfully. Nov 1 00:42:59.635431 kubelet[1960]: E1101 00:42:59.635317 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:42:59.643823 env[1205]: time="2025-11-01T00:42:59.643767475Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:42:59.667150 env[1205]: time="2025-11-01T00:42:59.667081324Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\"" Nov 1 00:42:59.667827 env[1205]: time="2025-11-01T00:42:59.667783322Z" level=info msg="StartContainer for \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\"" Nov 1 00:42:59.689466 systemd[1]: Started cri-containerd-adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7.scope. Nov 1 00:42:59.724029 env[1205]: time="2025-11-01T00:42:59.723955173Z" level=info msg="StartContainer for \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\" returns successfully" Nov 1 00:42:59.735338 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:42:59.735709 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:42:59.735935 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:42:59.738054 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:59.739754 systemd[1]: cri-containerd-adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7.scope: Deactivated successfully. Nov 1 00:42:59.746774 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:42:59.769440 env[1205]: time="2025-11-01T00:42:59.769245684Z" level=info msg="shim disconnected" id=adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7 Nov 1 00:42:59.769440 env[1205]: time="2025-11-01T00:42:59.769318556Z" level=warning msg="cleaning up after shim disconnected" id=adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7 namespace=k8s.io Nov 1 00:42:59.769440 env[1205]: time="2025-11-01T00:42:59.769331030Z" level=info msg="cleaning up dead shim" Nov 1 00:42:59.776142 env[1205]: time="2025-11-01T00:42:59.776067411Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Nov 1 00:43:00.516822 systemd[1]: run-containerd-runc-k8s.io-adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7-runc.wR6gnG.mount: Deactivated successfully. Nov 1 00:43:00.516945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7-rootfs.mount: Deactivated successfully. Nov 1 00:43:00.638309 kubelet[1960]: E1101 00:43:00.638273 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:00.648841 env[1205]: time="2025-11-01T00:43:00.648792357Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:43:00.662388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104556180.mount: Deactivated successfully. Nov 1 00:43:00.700177 env[1205]: time="2025-11-01T00:43:00.700117245Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\"" Nov 1 00:43:00.700704 env[1205]: time="2025-11-01T00:43:00.700662527Z" level=info msg="StartContainer for \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\"" Nov 1 00:43:00.725288 systemd[1]: Started cri-containerd-89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9.scope. Nov 1 00:43:00.760646 systemd[1]: cri-containerd-89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9.scope: Deactivated successfully. 
Nov 1 00:43:00.761761 env[1205]: time="2025-11-01T00:43:00.761672869Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02643a7_f7f3_447b_8eca_ff1c75038e9e.slice/cri-containerd-89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9.scope/memory.events\": no such file or directory" Nov 1 00:43:00.766423 env[1205]: time="2025-11-01T00:43:00.766385659Z" level=info msg="StartContainer for \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\" returns successfully" Nov 1 00:43:00.882687 env[1205]: time="2025-11-01T00:43:00.882614822Z" level=info msg="shim disconnected" id=89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9 Nov 1 00:43:00.882687 env[1205]: time="2025-11-01T00:43:00.882674689Z" level=warning msg="cleaning up after shim disconnected" id=89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9 namespace=k8s.io Nov 1 00:43:00.882687 env[1205]: time="2025-11-01T00:43:00.882687414Z" level=info msg="cleaning up dead shim" Nov 1 00:43:00.890185 env[1205]: time="2025-11-01T00:43:00.890126082Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2516 runtime=io.containerd.runc.v2\n" Nov 1 00:43:01.170950 env[1205]: time="2025-11-01T00:43:01.170784468Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:01.173064 env[1205]: time="2025-11-01T00:43:01.173022143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:01.174704 env[1205]: time="2025-11-01T00:43:01.174655233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:01.175396 env[1205]: time="2025-11-01T00:43:01.175331208Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:43:01.181190 env[1205]: time="2025-11-01T00:43:01.180881863Z" level=info msg="CreateContainer within sandbox \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:43:01.200478 env[1205]: time="2025-11-01T00:43:01.200393382Z" level=info msg="CreateContainer within sandbox \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\"" Nov 1 00:43:01.201402 env[1205]: time="2025-11-01T00:43:01.201138202Z" level=info msg="StartContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\"" Nov 1 00:43:01.221899 systemd[1]: Started cri-containerd-2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f.scope. 
Nov 1 00:43:01.252879 env[1205]: time="2025-11-01T00:43:01.252807228Z" level=info msg="StartContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" returns successfully" Nov 1 00:43:01.517463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9-rootfs.mount: Deactivated successfully. Nov 1 00:43:01.640637 kubelet[1960]: E1101 00:43:01.640574 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:01.642930 kubelet[1960]: E1101 00:43:01.642886 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:01.649929 env[1205]: time="2025-11-01T00:43:01.649862171Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:43:01.669824 env[1205]: time="2025-11-01T00:43:01.669751456Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\"" Nov 1 00:43:01.670438 env[1205]: time="2025-11-01T00:43:01.670406020Z" level=info msg="StartContainer for \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\"" Nov 1 00:43:01.707946 systemd[1]: Started cri-containerd-02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e.scope. Nov 1 00:43:01.722289 kubelet[1960]: I1101 00:43:01.722217 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xcf5c" podStartSLOduration=1.428262079 podStartE2EDuration="16.722198335s" podCreationTimestamp="2025-11-01 00:42:45 +0000 UTC" firstStartedPulling="2025-11-01 00:42:45.882468432 +0000 UTC m=+7.403153173" lastFinishedPulling="2025-11-01 00:43:01.176404698 +0000 UTC m=+22.697089429" observedRunningTime="2025-11-01 00:43:01.668785305 +0000 UTC m=+23.189470056" watchObservedRunningTime="2025-11-01 00:43:01.722198335 +0000 UTC m=+23.242883056" Nov 1 00:43:01.742521 systemd[1]: cri-containerd-02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e.scope: Deactivated successfully. 
Nov 1 00:43:01.744207 env[1205]: time="2025-11-01T00:43:01.744169824Z" level=info msg="StartContainer for \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\" returns successfully" Nov 1 00:43:01.940693 env[1205]: time="2025-11-01T00:43:01.940632135Z" level=info msg="shim disconnected" id=02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e Nov 1 00:43:01.940693 env[1205]: time="2025-11-01T00:43:01.940686751Z" level=warning msg="cleaning up after shim disconnected" id=02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e namespace=k8s.io Nov 1 00:43:01.940693 env[1205]: time="2025-11-01T00:43:01.940696199Z" level=info msg="cleaning up dead shim" Nov 1 00:43:01.949063 env[1205]: time="2025-11-01T00:43:01.948904868Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2607 runtime=io.containerd.runc.v2\n" Nov 1 00:43:02.516721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e-rootfs.mount: Deactivated successfully. Nov 1 00:43:02.647373 kubelet[1960]: E1101 00:43:02.647323 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:02.647852 kubelet[1960]: E1101 00:43:02.647457 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:02.653752 env[1205]: time="2025-11-01T00:43:02.653685055Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:43:02.673528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149484740.mount: Deactivated successfully. Nov 1 00:43:02.677253 env[1205]: time="2025-11-01T00:43:02.677204259Z" level=info msg="CreateContainer within sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\"" Nov 1 00:43:02.677791 env[1205]: time="2025-11-01T00:43:02.677764019Z" level=info msg="StartContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\"" Nov 1 00:43:02.694141 systemd[1]: Started cri-containerd-47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab.scope. Nov 1 00:43:02.734172 env[1205]: time="2025-11-01T00:43:02.734114493Z" level=info msg="StartContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" returns successfully" Nov 1 00:43:02.855733 kubelet[1960]: I1101 00:43:02.855691 1960 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:43:02.928892 systemd[1]: Created slice kubepods-burstable-pod2d859f20_d263_4228_9fcd_8381c05b71f8.slice. Nov 1 00:43:02.941995 systemd[1]: Created slice kubepods-burstable-pod338ac54f_08b5_4ee9_af0a_aa561e1fcbb3.slice. 
Nov 1 00:43:03.056510 kubelet[1960]: I1101 00:43:03.056461 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6c8\" (UniqueName: \"kubernetes.io/projected/2d859f20-d263-4228-9fcd-8381c05b71f8-kube-api-access-7v6c8\") pod \"coredns-674b8bbfcf-r7wx6\" (UID: \"2d859f20-d263-4228-9fcd-8381c05b71f8\") " pod="kube-system/coredns-674b8bbfcf-r7wx6" Nov 1 00:43:03.056713 kubelet[1960]: I1101 00:43:03.056525 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d859f20-d263-4228-9fcd-8381c05b71f8-config-volume\") pod \"coredns-674b8bbfcf-r7wx6\" (UID: \"2d859f20-d263-4228-9fcd-8381c05b71f8\") " pod="kube-system/coredns-674b8bbfcf-r7wx6" Nov 1 00:43:03.056713 kubelet[1960]: I1101 00:43:03.056554 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t796g\" (UniqueName: \"kubernetes.io/projected/338ac54f-08b5-4ee9-af0a-aa561e1fcbb3-kube-api-access-t796g\") pod \"coredns-674b8bbfcf-9lrnw\" (UID: \"338ac54f-08b5-4ee9-af0a-aa561e1fcbb3\") " pod="kube-system/coredns-674b8bbfcf-9lrnw" Nov 1 00:43:03.056713 kubelet[1960]: I1101 00:43:03.056580 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/338ac54f-08b5-4ee9-af0a-aa561e1fcbb3-config-volume\") pod \"coredns-674b8bbfcf-9lrnw\" (UID: \"338ac54f-08b5-4ee9-af0a-aa561e1fcbb3\") " pod="kube-system/coredns-674b8bbfcf-9lrnw" Nov 1 00:43:03.236898 kubelet[1960]: E1101 00:43:03.236760 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:03.240689 env[1205]: time="2025-11-01T00:43:03.240269641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r7wx6,Uid:2d859f20-d263-4228-9fcd-8381c05b71f8,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:03.248035 kubelet[1960]: E1101 00:43:03.247981 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:03.248809 env[1205]: time="2025-11-01T00:43:03.248758456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lrnw,Uid:338ac54f-08b5-4ee9-af0a-aa561e1fcbb3,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:03.655251 kubelet[1960]: E1101 00:43:03.655212 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:04.654050 kubelet[1960]: E1101 00:43:04.654009 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:04.754704 systemd-networkd[1022]: cilium_host: Link UP Nov 1 00:43:04.754862 systemd-networkd[1022]: cilium_net: Link UP Nov 1 00:43:04.757790 systemd-networkd[1022]: cilium_net: Gained carrier Nov 1 00:43:04.760544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:43:04.760698 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:43:04.760877 systemd-networkd[1022]: cilium_host: Gained carrier Nov 1 00:43:04.761086 systemd-networkd[1022]: cilium_net: Gained 
IPv6LL Nov 1 00:43:04.761276 systemd-networkd[1022]: cilium_host: Gained IPv6LL Nov 1 00:43:04.837480 systemd-networkd[1022]: cilium_vxlan: Link UP Nov 1 00:43:04.837488 systemd-networkd[1022]: cilium_vxlan: Gained carrier Nov 1 00:43:05.039381 kernel: NET: Registered PF_ALG protocol family Nov 1 00:43:05.655482 kubelet[1960]: E1101 00:43:05.655438 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:05.659150 systemd-networkd[1022]: lxc_health: Link UP Nov 1 00:43:05.672091 systemd-networkd[1022]: lxc_health: Gained carrier Nov 1 00:43:05.672374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:43:05.753046 kubelet[1960]: I1101 00:43:05.752966 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v8mz7" podStartSLOduration=8.062635345 podStartE2EDuration="20.752946925s" podCreationTimestamp="2025-11-01 00:42:45 +0000 UTC" firstStartedPulling="2025-11-01 00:42:45.809500482 +0000 UTC m=+7.330185213" lastFinishedPulling="2025-11-01 00:42:58.499812052 +0000 UTC m=+20.020496793" observedRunningTime="2025-11-01 00:43:03.723156015 +0000 UTC m=+25.243840746" watchObservedRunningTime="2025-11-01 00:43:05.752946925 +0000 UTC m=+27.273631647" Nov 1 00:43:05.781545 systemd-networkd[1022]: lxcb072c2afec16: Link UP Nov 1 00:43:05.793379 kernel: eth0: renamed from tmp72db1 Nov 1 00:43:05.801503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:05.801595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb072c2afec16: link becomes ready Nov 1 00:43:05.801754 systemd-networkd[1022]: lxcb072c2afec16: Gained carrier Nov 1 00:43:05.802910 systemd-networkd[1022]: lxc9f3d6ae5a188: Link UP Nov 1 00:43:05.809370 kernel: eth0: renamed from tmpf572c Nov 1 00:43:05.818848 systemd-networkd[1022]: lxc9f3d6ae5a188: Gained carrier Nov 1 00:43:05.819426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f3d6ae5a188: link becomes ready Nov 1 00:43:05.881486 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL Nov 1 00:43:06.657644 kubelet[1960]: E1101 00:43:06.657607 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:06.905536 systemd-networkd[1022]: lxcb072c2afec16: Gained IPv6LL Nov 1 00:43:06.969523 systemd-networkd[1022]: lxc9f3d6ae5a188: Gained IPv6LL Nov 1 00:43:07.545535 systemd-networkd[1022]: lxc_health: Gained IPv6LL Nov 1 00:43:07.658545 kubelet[1960]: E1101 00:43:07.658506 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:08.660911 kubelet[1960]: E1101 00:43:08.660862 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:09.389051 env[1205]: time="2025-11-01T00:43:09.388967319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:09.389051 env[1205]: time="2025-11-01T00:43:09.389009110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:09.389051 env[1205]: time="2025-11-01T00:43:09.389019650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:09.389520 env[1205]: time="2025-11-01T00:43:09.389221091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c pid=3183 runtime=io.containerd.runc.v2 Nov 1 00:43:09.406545 systemd[1]: run-containerd-runc-k8s.io-72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c-runc.FYEYS2.mount: Deactivated successfully. Nov 1 00:43:09.411897 systemd[1]: Started cri-containerd-72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c.scope. Nov 1 00:43:09.420339 env[1205]: time="2025-11-01T00:43:09.420226169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:09.420339 env[1205]: time="2025-11-01T00:43:09.420295753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:09.420339 env[1205]: time="2025-11-01T00:43:09.420310572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:09.420702 env[1205]: time="2025-11-01T00:43:09.420589934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f572c4c8619897f14ddebdca3d11177b385c380351397a1606fcb781cbab0a3d pid=3218 runtime=io.containerd.runc.v2 Nov 1 00:43:09.430806 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:43:09.439577 systemd[1]: Started cri-containerd-f572c4c8619897f14ddebdca3d11177b385c380351397a1606fcb781cbab0a3d.scope. 
Nov 1 00:43:09.452963 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:43:09.459259 env[1205]: time="2025-11-01T00:43:09.459207644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r7wx6,Uid:2d859f20-d263-4228-9fcd-8381c05b71f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c\"" Nov 1 00:43:09.460112 kubelet[1960]: E1101 00:43:09.460078 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:09.480963 env[1205]: time="2025-11-01T00:43:09.480903629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lrnw,Uid:338ac54f-08b5-4ee9-af0a-aa561e1fcbb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f572c4c8619897f14ddebdca3d11177b385c380351397a1606fcb781cbab0a3d\"" Nov 1 00:43:09.481621 kubelet[1960]: E1101 00:43:09.481591 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:09.518929 env[1205]: time="2025-11-01T00:43:09.518872070Z" level=info msg="CreateContainer within sandbox \"72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:09.527616 env[1205]: time="2025-11-01T00:43:09.527548697Z" level=info msg="CreateContainer within sandbox \"f572c4c8619897f14ddebdca3d11177b385c380351397a1606fcb781cbab0a3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:09.564091 env[1205]: time="2025-11-01T00:43:09.564029142Z" level=info msg="CreateContainer within sandbox \"72db17680da1694a62a95426a26681684925d2d73facc44e7462eb076ff8750c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f70a9a7bc5310af59c650f0fb90ad5905ff9e1b9f751af27644f3701799f4883\"" Nov 1 00:43:09.566159 env[1205]: time="2025-11-01T00:43:09.566098074Z" level=info msg="CreateContainer within sandbox \"f572c4c8619897f14ddebdca3d11177b385c380351397a1606fcb781cbab0a3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3b263970331f5511a6bb4cbc5504c4e44b4bedc9a35f9eaf112ade3d76c499c\"" Nov 1 00:43:09.566441 env[1205]: time="2025-11-01T00:43:09.566288644Z" level=info msg="StartContainer for \"f70a9a7bc5310af59c650f0fb90ad5905ff9e1b9f751af27644f3701799f4883\"" Nov 1 00:43:09.566559 env[1205]: time="2025-11-01T00:43:09.566526996Z" level=info msg="StartContainer for \"a3b263970331f5511a6bb4cbc5504c4e44b4bedc9a35f9eaf112ade3d76c499c\"" Nov 1 00:43:09.584837 systemd[1]: Started cri-containerd-f70a9a7bc5310af59c650f0fb90ad5905ff9e1b9f751af27644f3701799f4883.scope. Nov 1 00:43:09.592440 systemd[1]: Started cri-containerd-a3b263970331f5511a6bb4cbc5504c4e44b4bedc9a35f9eaf112ade3d76c499c.scope. 
Nov 1 00:43:09.898682 env[1205]: time="2025-11-01T00:43:09.898604847Z" level=info msg="StartContainer for \"f70a9a7bc5310af59c650f0fb90ad5905ff9e1b9f751af27644f3701799f4883\" returns successfully" Nov 1 00:43:10.209417 env[1205]: time="2025-11-01T00:43:10.209239975Z" level=info msg="StartContainer for \"a3b263970331f5511a6bb4cbc5504c4e44b4bedc9a35f9eaf112ade3d76c499c\" returns successfully" Nov 1 00:43:10.212759 kubelet[1960]: E1101 00:43:10.212659 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:10.234145 kubelet[1960]: I1101 00:43:10.233543 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r7wx6" podStartSLOduration=25.233524553 podStartE2EDuration="25.233524553s" podCreationTimestamp="2025-11-01 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:10.233268727 +0000 UTC m=+31.753953458" watchObservedRunningTime="2025-11-01 00:43:10.233524553 +0000 UTC m=+31.754209284" Nov 1 00:43:11.214865 kubelet[1960]: E1101 00:43:11.214731 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:11.214865 kubelet[1960]: E1101 00:43:11.214731 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:11.226870 kubelet[1960]: I1101 00:43:11.226798 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9lrnw" podStartSLOduration=26.226781564 podStartE2EDuration="26.226781564s" podCreationTimestamp="2025-11-01 00:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:11.226631854 +0000 UTC m=+32.747316585" watchObservedRunningTime="2025-11-01 00:43:11.226781564 +0000 UTC m=+32.747466295" Nov 1 00:43:12.216329 kubelet[1960]: E1101 00:43:12.216290 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:15.688460 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:54570.service. Nov 1 00:43:15.727187 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 54570 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:15.728672 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:15.734504 systemd-logind[1194]: New session 6 of user core. Nov 1 00:43:15.736819 systemd[1]: Started session-6.scope. Nov 1 00:43:15.894212 sshd[3342]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:15.896737 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:54570.service: Deactivated successfully. Nov 1 00:43:15.897559 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:43:15.898076 systemd-logind[1194]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:43:15.898963 systemd-logind[1194]: Removed session 6. Nov 1 00:43:20.899957 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:35878.service. 
Nov 1 00:43:20.933471 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:20.934691 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:20.938096 systemd-logind[1194]: New session 7 of user core. Nov 1 00:43:20.938938 systemd[1]: Started session-7.scope. Nov 1 00:43:21.054385 sshd[3359]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:21.057054 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:35878.service: Deactivated successfully. Nov 1 00:43:21.057836 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:21.058610 systemd-logind[1194]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:21.059441 systemd-logind[1194]: Removed session 7. Nov 1 00:43:21.216241 kubelet[1960]: E1101 00:43:21.216022 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:21.235936 kubelet[1960]: E1101 00:43:21.235898 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:26.059268 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:35882.service. Nov 1 00:43:26.093261 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 35882 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:26.094689 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:26.098535 systemd-logind[1194]: New session 8 of user core. Nov 1 00:43:26.099678 systemd[1]: Started session-8.scope. Nov 1 00:43:26.217505 sshd[3377]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:26.219812 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:35882.service: Deactivated successfully. Nov 1 00:43:26.220619 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:26.221274 systemd-logind[1194]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:43:26.222025 systemd-logind[1194]: Removed session 8. Nov 1 00:43:31.221921 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:41410.service. Nov 1 00:43:31.298559 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 41410 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:31.299979 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:31.305461 systemd-logind[1194]: New session 9 of user core. Nov 1 00:43:31.306229 systemd[1]: Started session-9.scope. Nov 1 00:43:31.428430 sshd[3392]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:31.431840 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:41410.service: Deactivated successfully. Nov 1 00:43:31.432639 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:43:31.433124 systemd-logind[1194]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:43:31.433893 systemd-logind[1194]: Removed session 9. Nov 1 00:43:36.433105 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:41426.service. Nov 1 00:43:36.465928 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 41426 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:36.467581 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:36.472064 systemd-logind[1194]: New session 10 of user core. 
Nov 1 00:43:36.472899 systemd[1]: Started session-10.scope. Nov 1 00:43:36.586060 sshd[3406]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:36.589306 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:41426.service: Deactivated successfully. Nov 1 00:43:36.589883 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:43:36.590537 systemd-logind[1194]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:43:36.591652 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:41438.service. Nov 1 00:43:36.592407 systemd-logind[1194]: Removed session 10. Nov 1 00:43:36.621986 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:36.623290 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:36.627105 systemd-logind[1194]: New session 11 of user core. Nov 1 00:43:36.627946 systemd[1]: Started session-11.scope. Nov 1 00:43:36.795692 sshd[3421]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:36.800332 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:41442.service. Nov 1 00:43:36.808912 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:41438.service: Deactivated successfully. Nov 1 00:43:36.809912 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:43:36.810690 systemd-logind[1194]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:43:36.811767 systemd-logind[1194]: Removed session 11. Nov 1 00:43:36.839165 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 41442 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:36.840601 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:36.844936 systemd-logind[1194]: New session 12 of user core. Nov 1 00:43:36.846069 systemd[1]: Started session-12.scope. Nov 1 00:43:36.963123 sshd[3431]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:36.965970 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:41442.service: Deactivated successfully. Nov 1 00:43:36.966692 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:43:36.967496 systemd-logind[1194]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:43:36.968176 systemd-logind[1194]: Removed session 12. Nov 1 00:43:41.967489 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:49094.service. Nov 1 00:43:41.996829 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 49094 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:41.997870 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:42.001140 systemd-logind[1194]: New session 13 of user core. Nov 1 00:43:42.002243 systemd[1]: Started session-13.scope. Nov 1 00:43:42.109940 sshd[3448]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:42.112486 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:49094.service: Deactivated successfully. Nov 1 00:43:42.113390 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:43:42.113942 systemd-logind[1194]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:43:42.114739 systemd-logind[1194]: Removed session 13. Nov 1 00:43:47.119230 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:49104.service. 
Nov 1 00:43:47.170574 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 49104 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:47.175061 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:47.184375 systemd-logind[1194]: New session 14 of user core. Nov 1 00:43:47.185551 systemd[1]: Started session-14.scope. Nov 1 00:43:47.366101 sshd[3463]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:47.370949 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:49104.service: Deactivated successfully. Nov 1 00:43:47.374407 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:43:47.379964 systemd-logind[1194]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:43:47.382958 systemd-logind[1194]: Removed session 14. Nov 1 00:43:48.589438 kubelet[1960]: E1101 00:43:48.587537 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:52.371459 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:49576.service. Nov 1 00:43:52.400898 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:52.402244 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:52.405949 systemd-logind[1194]: New session 15 of user core. Nov 1 00:43:52.407200 systemd[1]: Started session-15.scope. Nov 1 00:43:52.589134 sshd[3477]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:52.593456 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:49576.service: Deactivated successfully. Nov 1 00:43:52.594224 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:43:52.594892 systemd-logind[1194]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:43:52.596457 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:49584.service. Nov 1 00:43:52.597258 systemd-logind[1194]: Removed session 15. Nov 1 00:43:52.630028 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 49584 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:52.631319 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:52.634770 systemd-logind[1194]: New session 16 of user core. Nov 1 00:43:52.635654 systemd[1]: Started session-16.scope. Nov 1 00:43:52.948422 sshd[3490]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:52.951768 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:49584.service: Deactivated successfully. Nov 1 00:43:52.952452 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:43:52.953144 systemd-logind[1194]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:43:52.954406 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:49590.service. Nov 1 00:43:52.955230 systemd-logind[1194]: Removed session 16. Nov 1 00:43:52.983816 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 49590 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:52.984862 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:52.988109 systemd-logind[1194]: New session 17 of user core. Nov 1 00:43:52.989040 systemd[1]: Started session-17.scope. 
Nov 1 00:43:53.571948 kubelet[1960]: E1101 00:43:53.571900 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:43:53.627610 sshd[3501]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:53.630293 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:49596.service. Nov 1 00:43:53.632476 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:49590.service: Deactivated successfully. Nov 1 00:43:53.633780 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:43:53.636235 systemd-logind[1194]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:43:53.637410 systemd-logind[1194]: Removed session 17. Nov 1 00:43:53.665875 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 49596 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:53.667858 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:53.672181 systemd-logind[1194]: New session 18 of user core. Nov 1 00:43:53.673034 systemd[1]: Started session-18.scope. Nov 1 00:43:54.072246 sshd[3521]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:54.078192 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:49598.service. Nov 1 00:43:54.079116 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:49596.service: Deactivated successfully. Nov 1 00:43:54.080016 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:43:54.081255 systemd-logind[1194]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:43:54.082270 systemd-logind[1194]: Removed session 18. Nov 1 00:43:54.110728 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 49598 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:54.112499 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:54.116710 systemd-logind[1194]: New session 19 of user core. Nov 1 00:43:54.118385 systemd[1]: Started session-19.scope. Nov 1 00:43:54.377273 sshd[3533]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:54.380792 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:49598.service: Deactivated successfully. Nov 1 00:43:54.381867 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:43:54.382592 systemd-logind[1194]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:43:54.383539 systemd-logind[1194]: Removed session 19. Nov 1 00:43:59.382126 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:49614.service. Nov 1 00:43:59.411199 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:43:59.412432 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:59.416759 systemd-logind[1194]: New session 20 of user core. Nov 1 00:43:59.417641 systemd[1]: Started session-20.scope. Nov 1 00:43:59.531048 sshd[3547]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:59.534137 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:49614.service: Deactivated successfully. Nov 1 00:43:59.535275 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:43:59.535911 systemd-logind[1194]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:43:59.536764 systemd-logind[1194]: Removed session 20. Nov 1 00:44:04.535757 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:60974.service. 
Nov 1 00:44:04.565387 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 60974 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:04.566692 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:04.571093 systemd-logind[1194]: New session 21 of user core. Nov 1 00:44:04.572042 systemd[1]: Started session-21.scope. Nov 1 00:44:04.701320 sshd[3563]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:04.704281 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:60974.service: Deactivated successfully. Nov 1 00:44:04.705204 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:44:04.705786 systemd-logind[1194]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:44:04.706586 systemd-logind[1194]: Removed session 21. Nov 1 00:44:06.571868 kubelet[1960]: E1101 00:44:06.571805 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:09.572730 kubelet[1960]: E1101 00:44:09.572681 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:09.573208 kubelet[1960]: E1101 00:44:09.572774 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:09.705678 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:60982.service. Nov 1 00:44:09.734214 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 60982 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:09.735308 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:09.739734 systemd-logind[1194]: New session 22 of user core. Nov 1 00:44:09.740734 systemd[1]: Started session-22.scope. Nov 1 00:44:09.860100 sshd[3577]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:09.864254 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:60982.service: Deactivated successfully. Nov 1 00:44:09.864856 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:44:09.865417 systemd-logind[1194]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:44:09.866562 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:60994.service. Nov 1 00:44:09.870772 systemd-logind[1194]: Removed session 22. Nov 1 00:44:09.895686 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 60994 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:09.896822 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:09.900082 systemd-logind[1194]: New session 23 of user core. Nov 1 00:44:09.900882 systemd[1]: Started session-23.scope. Nov 1 00:44:11.467746 env[1205]: time="2025-11-01T00:44:11.467681127Z" level=info msg="StopContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" with timeout 30 (s)" Nov 1 00:44:11.470583 env[1205]: time="2025-11-01T00:44:11.468310070Z" level=info msg="Stop container \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" with signal terminated" Nov 1 00:44:11.482206 systemd[1]: cri-containerd-2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f.scope: Deactivated successfully. 
Nov 1 00:44:11.493654 env[1205]: time="2025-11-01T00:44:11.493518821Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:44:11.497923 env[1205]: time="2025-11-01T00:44:11.497881448Z" level=info msg="StopContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" with timeout 2 (s)" Nov 1 00:44:11.498508 env[1205]: time="2025-11-01T00:44:11.498482278Z" level=info msg="Stop container \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" with signal terminated" Nov 1 00:44:11.502752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f-rootfs.mount: Deactivated successfully. Nov 1 00:44:11.506075 systemd-networkd[1022]: lxc_health: Link DOWN Nov 1 00:44:11.506082 systemd-networkd[1022]: lxc_health: Lost carrier Nov 1 00:44:11.507841 env[1205]: time="2025-11-01T00:44:11.507788673Z" level=info msg="shim disconnected" id=2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f Nov 1 00:44:11.507939 env[1205]: time="2025-11-01T00:44:11.507842987Z" level=warning msg="cleaning up after shim disconnected" id=2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f namespace=k8s.io Nov 1 00:44:11.507939 env[1205]: time="2025-11-01T00:44:11.507853367Z" level=info msg="cleaning up dead shim" Nov 1 00:44:11.514290 env[1205]: time="2025-11-01T00:44:11.514224196Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3644 runtime=io.containerd.runc.v2\n" Nov 1 00:44:11.516917 env[1205]: time="2025-11-01T00:44:11.516883615Z" level=info msg="StopContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" returns successfully" Nov 1 00:44:11.517924 env[1205]: time="2025-11-01T00:44:11.517497569Z" level=info msg="StopPodSandbox for \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\"" Nov 1 00:44:11.517924 env[1205]: time="2025-11-01T00:44:11.517572863Z" level=info msg="Container to stop \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.519517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec-shm.mount: Deactivated successfully. Nov 1 00:44:11.545772 systemd[1]: cri-containerd-c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec.scope: Deactivated successfully. Nov 1 00:44:11.552756 systemd[1]: cri-containerd-47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab.scope: Deactivated successfully. Nov 1 00:44:11.553062 systemd[1]: cri-containerd-47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab.scope: Consumed 6.583s CPU time. Nov 1 00:44:11.569479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab-rootfs.mount: Deactivated successfully. Nov 1 00:44:11.572460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec-rootfs.mount: Deactivated successfully. 
Nov 1 00:44:11.579535 env[1205]: time="2025-11-01T00:44:11.579482062Z" level=info msg="shim disconnected" id=c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec Nov 1 00:44:11.579702 env[1205]: time="2025-11-01T00:44:11.579537999Z" level=warning msg="cleaning up after shim disconnected" id=c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec namespace=k8s.io Nov 1 00:44:11.579702 env[1205]: time="2025-11-01T00:44:11.579555051Z" level=info msg="cleaning up dead shim" Nov 1 00:44:11.579702 env[1205]: time="2025-11-01T00:44:11.579557085Z" level=info msg="shim disconnected" id=47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab Nov 1 00:44:11.579702 env[1205]: time="2025-11-01T00:44:11.579583195Z" level=warning msg="cleaning up after shim disconnected" id=47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab namespace=k8s.io Nov 1 00:44:11.579702 env[1205]: time="2025-11-01T00:44:11.579591962Z" level=info msg="cleaning up dead shim" Nov 1 00:44:11.586158 env[1205]: time="2025-11-01T00:44:11.586094413Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3691 runtime=io.containerd.runc.v2\n" Nov 1 00:44:11.587422 env[1205]: time="2025-11-01T00:44:11.587331159Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3690 runtime=io.containerd.runc.v2\n" Nov 1 00:44:11.587682 env[1205]: time="2025-11-01T00:44:11.587645360Z" level=info msg="TearDown network for sandbox \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\" successfully" Nov 1 00:44:11.587682 env[1205]: time="2025-11-01T00:44:11.587675107Z" level=info msg="StopPodSandbox for \"c46f83675d3fa24df896efac19230ce44cafa1cb9f7e88a84ca155cac3deccec\" returns successfully" Nov 1 00:44:11.588551 env[1205]: time="2025-11-01T00:44:11.588521307Z" level=info msg="StopContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" returns successfully" Nov 1 00:44:11.589055 env[1205]: time="2025-11-01T00:44:11.589028427Z" level=info msg="StopPodSandbox for \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\"" Nov 1 00:44:11.589245 env[1205]: time="2025-11-01T00:44:11.589102298Z" level=info msg="Container to stop \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.589245 env[1205]: time="2025-11-01T00:44:11.589125562Z" level=info msg="Container to stop \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.589245 env[1205]: time="2025-11-01T00:44:11.589138788Z" level=info msg="Container to stop \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.589245 env[1205]: time="2025-11-01T00:44:11.589151582Z" level=info msg="Container to stop \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.589245 env[1205]: time="2025-11-01T00:44:11.589164036Z" level=info msg="Container to stop \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:11.595345 systemd[1]: 
cri-containerd-b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6.scope: Deactivated successfully. Nov 1 00:44:11.617985 env[1205]: time="2025-11-01T00:44:11.617933069Z" level=info msg="shim disconnected" id=b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6 Nov 1 00:44:11.617985 env[1205]: time="2025-11-01T00:44:11.617985409Z" level=warning msg="cleaning up after shim disconnected" id=b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6 namespace=k8s.io Nov 1 00:44:11.618192 env[1205]: time="2025-11-01T00:44:11.617996550Z" level=info msg="cleaning up dead shim" Nov 1 00:44:11.625120 env[1205]: time="2025-11-01T00:44:11.625071938Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n" Nov 1 00:44:11.625632 env[1205]: time="2025-11-01T00:44:11.625602743Z" level=info msg="TearDown network for sandbox \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" successfully" Nov 1 00:44:11.625737 env[1205]: time="2025-11-01T00:44:11.625710499Z" level=info msg="StopPodSandbox for \"b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6\" returns successfully" Nov 1 00:44:11.677617 kubelet[1960]: I1101 00:44:11.677553 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-cilium-config-path\") pod \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\" (UID: \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\") " Nov 1 00:44:11.677617 kubelet[1960]: I1101 00:44:11.677621 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbfph\" (UniqueName: \"kubernetes.io/projected/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-kube-api-access-wbfph\") pod \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\" (UID: \"0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71\") " Nov 1 00:44:11.680177 kubelet[1960]: I1101 00:44:11.680137 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71" (UID: "0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:11.680920 kubelet[1960]: I1101 00:44:11.680890 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-kube-api-access-wbfph" (OuterVolumeSpecName: "kube-api-access-wbfph") pod "0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71" (UID: "0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71"). InnerVolumeSpecName "kube-api-access-wbfph". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778052 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cni-path\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778099 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-xtables-lock\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778120 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-run\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778145 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dwm4\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-kube-api-access-8dwm4\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778183 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a02643a7-f7f3-447b-8eca-ff1c75038e9e-clustermesh-secrets\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779227 kubelet[1960]: I1101 00:44:11.778054 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cni-path" (OuterVolumeSpecName: "cni-path") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778210 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hostproc\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778185 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778233 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-bpf-maps\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778246 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-net\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778260 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-etc-cni-netd\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779515 kubelet[1960]: I1101 00:44:11.778291 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-cgroup\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778312 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hubble-tls\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778332 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-lib-modules\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778371 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-config-path\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778394 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-kernel\") pod \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\" (UID: \"a02643a7-f7f3-447b-8eca-ff1c75038e9e\") " Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778434 1960 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbfph\" (UniqueName: \"kubernetes.io/projected/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-kube-api-access-wbfph\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778447 1960 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.779683 kubelet[1960]: I1101 00:44:11.778458 1960 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.779863 kubelet[1960]: I1101 00:44:11.778469 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.779863 kubelet[1960]: I1101 00:44:11.778502 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.779863 kubelet[1960]: I1101 00:44:11.778496 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.779863 kubelet[1960]: I1101 00:44:11.778537 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.779863 kubelet[1960]: I1101 00:44:11.778550 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hostproc" (OuterVolumeSpecName: "hostproc") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.780045 kubelet[1960]: I1101 00:44:11.778578 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.780045 kubelet[1960]: I1101 00:44:11.778592 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.780045 kubelet[1960]: I1101 00:44:11.779148 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.780193 kubelet[1960]: I1101 00:44:11.780170 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:11.781693 kubelet[1960]: I1101 00:44:11.781660 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02643a7-f7f3-447b-8eca-ff1c75038e9e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:11.781805 kubelet[1960]: I1101 00:44:11.781769 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:11.782304 kubelet[1960]: I1101 00:44:11.782250 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-kube-api-access-8dwm4" (OuterVolumeSpecName: "kube-api-access-8dwm4") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "kube-api-access-8dwm4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:11.782395 kubelet[1960]: I1101 00:44:11.782374 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a02643a7-f7f3-447b-8eca-ff1c75038e9e" (UID: "a02643a7-f7f3-447b-8eca-ff1c75038e9e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:11.878867 kubelet[1960]: I1101 00:44:11.878816 1960 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a02643a7-f7f3-447b-8eca-ff1c75038e9e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.878867 kubelet[1960]: I1101 00:44:11.878847 1960 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.878867 kubelet[1960]: I1101 00:44:11.878861 1960 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.878867 kubelet[1960]: I1101 00:44:11.878876 1960 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878889 1960 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878897 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878906 1960 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878915 1960 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878925 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878933 1960 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878940 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a02643a7-f7f3-447b-8eca-ff1c75038e9e-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:11.879141 kubelet[1960]: I1101 00:44:11.878948 1960 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8dwm4\" (UniqueName: \"kubernetes.io/projected/a02643a7-f7f3-447b-8eca-ff1c75038e9e-kube-api-access-8dwm4\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:12.343641 kubelet[1960]: I1101 00:44:12.343592 1960 scope.go:117] "RemoveContainer" containerID="2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f" Nov 1 00:44:12.345362 env[1205]: time="2025-11-01T00:44:12.345304581Z" level=info 
msg="RemoveContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\"" Nov 1 00:44:12.347427 systemd[1]: Removed slice kubepods-besteffort-pod0e91980c_3f1d_4d0b_b726_3dcf4fe7ba71.slice. Nov 1 00:44:12.349047 env[1205]: time="2025-11-01T00:44:12.349005133Z" level=info msg="RemoveContainer for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" returns successfully" Nov 1 00:44:12.349245 kubelet[1960]: I1101 00:44:12.349224 1960 scope.go:117] "RemoveContainer" containerID="2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f" Nov 1 00:44:12.350790 env[1205]: time="2025-11-01T00:44:12.350667433Z" level=error msg="ContainerStatus for \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\": not found" Nov 1 00:44:12.350914 kubelet[1960]: E1101 00:44:12.350885 1960 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\": not found" containerID="2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f" Nov 1 00:44:12.350974 kubelet[1960]: I1101 00:44:12.350917 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f"} err="failed to get container status \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2319a0e7456ee56ecd49083d578e9bb8945acd90bd502ec886169ed25bf9804f\": not found" Nov 1 00:44:12.351008 kubelet[1960]: I1101 00:44:12.350975 1960 scope.go:117] "RemoveContainer" containerID="47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab" Nov 1 00:44:12.351525 systemd[1]: Removed slice kubepods-burstable-poda02643a7_f7f3_447b_8eca_ff1c75038e9e.slice. Nov 1 00:44:12.351594 systemd[1]: kubepods-burstable-poda02643a7_f7f3_447b_8eca_ff1c75038e9e.slice: Consumed 6.718s CPU time. 
Nov 1 00:44:12.352318 env[1205]: time="2025-11-01T00:44:12.352274088Z" level=info msg="RemoveContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\"" Nov 1 00:44:12.396982 env[1205]: time="2025-11-01T00:44:12.396915331Z" level=info msg="RemoveContainer for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" returns successfully" Nov 1 00:44:12.397412 kubelet[1960]: I1101 00:44:12.397367 1960 scope.go:117] "RemoveContainer" containerID="02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e" Nov 1 00:44:12.399028 env[1205]: time="2025-11-01T00:44:12.398983439Z" level=info msg="RemoveContainer for \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\"" Nov 1 00:44:12.451028 env[1205]: time="2025-11-01T00:44:12.449625535Z" level=info msg="RemoveContainer for \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\" returns successfully" Nov 1 00:44:12.451235 kubelet[1960]: I1101 00:44:12.450684 1960 scope.go:117] "RemoveContainer" containerID="89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9" Nov 1 00:44:12.452092 env[1205]: time="2025-11-01T00:44:12.452051347Z" level=info msg="RemoveContainer for \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\"" Nov 1 00:44:12.475951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6-rootfs.mount: Deactivated successfully. Nov 1 00:44:12.476037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6a6211d114c274de9eeb15307a8d8ee3055692bbe06d12b854f56e3d0a5ebb6-shm.mount: Deactivated successfully. Nov 1 00:44:12.476095 systemd[1]: var-lib-kubelet-pods-0e91980c\x2d3f1d\x2d4d0b\x2db726\x2d3dcf4fe7ba71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbfph.mount: Deactivated successfully. Nov 1 00:44:12.476152 systemd[1]: var-lib-kubelet-pods-a02643a7\x2df7f3\x2d447b\x2d8eca\x2dff1c75038e9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8dwm4.mount: Deactivated successfully. Nov 1 00:44:12.476207 systemd[1]: var-lib-kubelet-pods-a02643a7\x2df7f3\x2d447b\x2d8eca\x2dff1c75038e9e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:12.476259 systemd[1]: var-lib-kubelet-pods-a02643a7\x2df7f3\x2d447b\x2d8eca\x2dff1c75038e9e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:44:12.487013 env[1205]: time="2025-11-01T00:44:12.486953674Z" level=info msg="RemoveContainer for \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\" returns successfully" Nov 1 00:44:12.487381 kubelet[1960]: I1101 00:44:12.487226 1960 scope.go:117] "RemoveContainer" containerID="adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7" Nov 1 00:44:12.488216 env[1205]: time="2025-11-01T00:44:12.488191262Z" level=info msg="RemoveContainer for \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\"" Nov 1 00:44:12.573377 env[1205]: time="2025-11-01T00:44:12.573317937Z" level=info msg="RemoveContainer for \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\" returns successfully" Nov 1 00:44:12.573658 kubelet[1960]: I1101 00:44:12.573626 1960 scope.go:117] "RemoveContainer" containerID="1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b" Nov 1 00:44:12.574056 kubelet[1960]: I1101 00:44:12.574024 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71" path="/var/lib/kubelet/pods/0e91980c-3f1d-4d0b-b726-3dcf4fe7ba71/volumes" Nov 1 00:44:12.574780 env[1205]: time="2025-11-01T00:44:12.574760336Z" level=info msg="RemoveContainer for \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\"" Nov 1 00:44:12.596763 env[1205]: time="2025-11-01T00:44:12.595576088Z" level=info msg="RemoveContainer for \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\" returns successfully" Nov 1 00:44:12.596763 env[1205]: time="2025-11-01T00:44:12.596030638Z" level=error msg="ContainerStatus for \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\": not found" Nov 1 00:44:12.597209 kubelet[1960]: I1101 00:44:12.595807 1960 scope.go:117] "RemoveContainer" containerID="47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab" Nov 1 00:44:12.597209 kubelet[1960]: E1101 00:44:12.596715 1960 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\": not found" containerID="47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab" Nov 1 00:44:12.597209 kubelet[1960]: I1101 00:44:12.596776 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab"} err="failed to get container status \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"47de4e946622adc3fc59093f1a32d798d92de203dc5f50df47b8f052a29161ab\": not found" Nov 1 00:44:12.597209 kubelet[1960]: I1101 00:44:12.596844 1960 scope.go:117] "RemoveContainer" containerID="02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e" Nov 1 00:44:12.597449 env[1205]: time="2025-11-01T00:44:12.597022987Z" level=error msg="ContainerStatus for \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\": not found" Nov 1 00:44:12.597487 kubelet[1960]: E1101 00:44:12.597228 1960 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\": not found" containerID="02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e" Nov 1 00:44:12.597487 kubelet[1960]: I1101 00:44:12.597247 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e"} err="failed to get container status \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"02edaddda1610bb707bb51f715d2dc5a1e1030f367d7a4a9e96a005dcd306a4e\": not found" Nov 1 00:44:12.597487 kubelet[1960]: I1101 00:44:12.597262 1960 scope.go:117] "RemoveContainer" containerID="89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9" Nov 1 00:44:12.597604 env[1205]: time="2025-11-01T00:44:12.597440246Z" level=error msg="ContainerStatus for \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\": not found" Nov 1 00:44:12.597646 kubelet[1960]: E1101 00:44:12.597536 1960 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\": not found" containerID="89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9" Nov 1 00:44:12.597646 kubelet[1960]: I1101 00:44:12.597574 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9"} err="failed to get container status \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"89cc701f9914fe7032dba47cabb39228fab4e21d5bac101bb39ad3189303c1c9\": not found" Nov 1 00:44:12.597646 kubelet[1960]: I1101 00:44:12.597594 1960 scope.go:117] "RemoveContainer" containerID="adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7" Nov 1 00:44:12.597807 env[1205]: time="2025-11-01T00:44:12.597738015Z" level=error msg="ContainerStatus for \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\": not found" Nov 1 00:44:12.597892 kubelet[1960]: E1101 00:44:12.597875 1960 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\": not found" containerID="adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7" Nov 1 00:44:12.597937 kubelet[1960]: I1101 00:44:12.597893 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7"} err="failed to get container status \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"adc607bdbc1f1e206d4a19d3c7083425a9defadfc5b8b17aa81a05e6dbe771f7\": not found" Nov 
1 00:44:12.597937 kubelet[1960]: I1101 00:44:12.597905 1960 scope.go:117] "RemoveContainer" containerID="1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b" Nov 1 00:44:12.598076 env[1205]: time="2025-11-01T00:44:12.598037639Z" level=error msg="ContainerStatus for \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\": not found" Nov 1 00:44:12.598156 kubelet[1960]: E1101 00:44:12.598135 1960 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\": not found" containerID="1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b" Nov 1 00:44:12.598199 kubelet[1960]: I1101 00:44:12.598153 1960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b"} err="failed to get container status \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d994f5339934438b5fdabfe592740cb3f820aad1be1b9243e90d2d3a0d5d83b\": not found" Nov 1 00:44:13.406901 sshd[3590]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:13.409792 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:60994.service: Deactivated successfully. Nov 1 00:44:13.410515 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:44:13.411358 systemd-logind[1194]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:44:13.412568 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:33092.service. Nov 1 00:44:13.413397 systemd-logind[1194]: Removed session 23. Nov 1 00:44:13.446314 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:13.447635 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:13.452870 systemd-logind[1194]: New session 24 of user core. Nov 1 00:44:13.453949 systemd[1]: Started session-24.scope. Nov 1 00:44:13.616878 kubelet[1960]: E1101 00:44:13.616821 1960 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:44:14.023105 sshd[3751]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:14.028840 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:33100.service. Nov 1 00:44:14.032072 systemd-logind[1194]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:44:14.033933 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:33092.service: Deactivated successfully. Nov 1 00:44:14.035029 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:44:14.037004 systemd-logind[1194]: Removed session 24. Nov 1 00:44:14.057534 systemd[1]: Created slice kubepods-burstable-pod29903b5a_cd6c_494c_8c10_8033443a400c.slice. Nov 1 00:44:14.068476 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 33100 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:14.070067 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:14.075452 systemd[1]: Started session-25.scope. 
Nov 1 00:44:14.076259 systemd-logind[1194]: New session 25 of user core. Nov 1 00:44:14.191770 kubelet[1960]: I1101 00:44:14.191716 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-etc-cni-netd\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191770 kubelet[1960]: I1101 00:44:14.191771 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-xtables-lock\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191794 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-hubble-tls\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191812 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cni-path\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191840 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-lib-modules\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191859 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-ipsec-secrets\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191877 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-net\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.191990 kubelet[1960]: I1101 00:44:14.191893 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-run\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.191916 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-bpf-maps\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.191941 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-zt7kr\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-kube-api-access-zt7kr\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.191961 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-config-path\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.191977 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-kernel\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.191999 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-hostproc\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192131 kubelet[1960]: I1101 00:44:14.192016 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-cgroup\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.192270 kubelet[1960]: I1101 00:44:14.192033 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-clustermesh-secrets\") pod \"cilium-spbxk\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " pod="kube-system/cilium-spbxk" Nov 1 00:44:14.211446 sshd[3762]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:14.219508 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:33100.service: Deactivated successfully. Nov 1 00:44:14.221904 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:44:14.224527 systemd-logind[1194]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:44:14.226790 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:33114.service. Nov 1 00:44:14.228661 systemd-logind[1194]: Removed session 25. Nov 1 00:44:14.239850 kubelet[1960]: E1101 00:44:14.239724 1960 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zt7kr lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-spbxk" podUID="29903b5a-cd6c-494c-8c10-8033443a400c" Nov 1 00:44:14.271824 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 33114 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:44:14.274671 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:14.281419 systemd-logind[1194]: New session 26 of user core. Nov 1 00:44:14.282061 systemd[1]: Started session-26.scope. 
Nov 1 00:44:14.495683 kubelet[1960]: I1101 00:44:14.495561 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-etc-cni-netd\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.495683 kubelet[1960]: I1101 00:44:14.495692 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cni-path\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495733 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-net\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495768 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-bpf-maps\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495753 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495813 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-xtables-lock\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495844 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-lib-modules\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496060 kubelet[1960]: I1101 00:44:14.495868 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-ipsec-secrets\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496420 kubelet[1960]: I1101 00:44:14.495869 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496420 kubelet[1960]: I1101 00:44:14.495885 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-cgroup\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496420 kubelet[1960]: I1101 00:44:14.495874 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cni-path" (OuterVolumeSpecName: "cni-path") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496420 kubelet[1960]: I1101 00:44:14.495895 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496420 kubelet[1960]: I1101 00:44:14.495911 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7kr\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-kube-api-access-zt7kr\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496754 kubelet[1960]: I1101 00:44:14.495929 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496754 kubelet[1960]: I1101 00:44:14.495934 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-hostproc\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496754 kubelet[1960]: I1101 00:44:14.495960 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-hubble-tls\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.496967 kubelet[1960]: I1101 00:44:14.495931 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496967 kubelet[1960]: I1101 00:44:14.495950 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.496967 kubelet[1960]: I1101 00:44:14.495977 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-hostproc" (OuterVolumeSpecName: "hostproc") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.497151 kubelet[1960]: I1101 00:44:14.497091 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-config-path\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.498892 kubelet[1960]: I1101 00:44:14.497159 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-run\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.498892 kubelet[1960]: I1101 00:44:14.498807 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-clustermesh-secrets\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.498892 kubelet[1960]: I1101 00:44:14.498846 1960 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-kernel\") pod \"29903b5a-cd6c-494c-8c10-8033443a400c\" (UID: \"29903b5a-cd6c-494c-8c10-8033443a400c\") " Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.498934 1960 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.498955 1960 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.498986 1960 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.499000 1960 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.499009 1960 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499039 kubelet[1960]: I1101 00:44:14.499025 1960 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499292 kubelet[1960]: I1101 00:44:14.499046 1960 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499292 kubelet[1960]: I1101 00:44:14.499059 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.499292 kubelet[1960]: I1101 00:44:14.499092 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.504520 kubelet[1960]: I1101 00:44:14.504436 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:14.504813 kubelet[1960]: I1101 00:44:14.504511 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:14.505018 kubelet[1960]: I1101 00:44:14.504589 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:14.506899 systemd[1]: var-lib-kubelet-pods-29903b5a\x2dcd6c\x2d494c\x2d8c10\x2d8033443a400c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzt7kr.mount: Deactivated successfully. Nov 1 00:44:14.507045 systemd[1]: var-lib-kubelet-pods-29903b5a\x2dcd6c\x2d494c\x2d8c10\x2d8033443a400c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:14.507136 systemd[1]: var-lib-kubelet-pods-29903b5a\x2dcd6c\x2d494c\x2d8c10\x2d8033443a400c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:14.508560 kubelet[1960]: I1101 00:44:14.508513 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-kube-api-access-zt7kr" (OuterVolumeSpecName: "kube-api-access-zt7kr") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "kube-api-access-zt7kr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:14.509705 kubelet[1960]: I1101 00:44:14.509638 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:14.509976 kubelet[1960]: I1101 00:44:14.509944 1960 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29903b5a-cd6c-494c-8c10-8033443a400c" (UID: "29903b5a-cd6c-494c-8c10-8033443a400c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:14.511830 systemd[1]: var-lib-kubelet-pods-29903b5a\x2dcd6c\x2d494c\x2d8c10\x2d8033443a400c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:14.575284 kubelet[1960]: I1101 00:44:14.575210 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02643a7-f7f3-447b-8eca-ff1c75038e9e" path="/var/lib/kubelet/pods/a02643a7-f7f3-447b-8eca-ff1c75038e9e/volumes" Nov 1 00:44:14.582706 systemd[1]: Removed slice kubepods-burstable-pod29903b5a_cd6c_494c_8c10_8033443a400c.slice. Nov 1 00:44:14.600187 kubelet[1960]: I1101 00:44:14.600094 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600187 kubelet[1960]: I1101 00:44:14.600145 1960 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600187 kubelet[1960]: I1101 00:44:14.600187 1960 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29903b5a-cd6c-494c-8c10-8033443a400c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600641 kubelet[1960]: I1101 00:44:14.600219 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600641 kubelet[1960]: I1101 00:44:14.600240 1960 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zt7kr\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-kube-api-access-zt7kr\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600641 kubelet[1960]: I1101 00:44:14.600262 1960 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29903b5a-cd6c-494c-8c10-8033443a400c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:14.600641 kubelet[1960]: I1101 00:44:14.600288 1960 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29903b5a-cd6c-494c-8c10-8033443a400c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:44:15.401939 systemd[1]: Created slice kubepods-burstable-podfa8f3d5e_efd5_4034_ad76_6b788d9b6b4e.slice. 
Nov 1 00:44:15.505612 kubelet[1960]: I1101 00:44:15.505559 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-cilium-cgroup\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.505612 kubelet[1960]: I1101 00:44:15.505605 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-cni-path\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.505612 kubelet[1960]: I1101 00:44:15.505621 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-host-proc-sys-net\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.505612 kubelet[1960]: I1101 00:44:15.505638 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-clustermesh-secrets\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505651 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-hubble-tls\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505664 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-lib-modules\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505677 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-bpf-maps\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505690 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-hostproc\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505703 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-etc-cni-netd\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506239 kubelet[1960]: I1101 00:44:15.505715 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-cilium-run\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506535 kubelet[1960]: I1101 00:44:15.505730 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-cilium-config-path\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506535 kubelet[1960]: I1101 00:44:15.505744 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-host-proc-sys-kernel\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506535 kubelet[1960]: I1101 00:44:15.505756 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-xtables-lock\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506535 kubelet[1960]: I1101 00:44:15.505767 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-cilium-ipsec-secrets\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.506535 kubelet[1960]: I1101 00:44:15.505780 1960 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk62k\" (UniqueName: \"kubernetes.io/projected/fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e-kube-api-access-rk62k\") pod \"cilium-fmn6x\" (UID: \"fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e\") " pod="kube-system/cilium-fmn6x" Nov 1 00:44:15.705888 kubelet[1960]: E1101 00:44:15.705742 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:15.706518 env[1205]: time="2025-11-01T00:44:15.706298696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmn6x,Uid:fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:15.720719 env[1205]: time="2025-11-01T00:44:15.720633248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:15.720719 env[1205]: time="2025-11-01T00:44:15.720676029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:15.720719 env[1205]: time="2025-11-01T00:44:15.720686109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:15.720941 env[1205]: time="2025-11-01T00:44:15.720850914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e pid=3807 runtime=io.containerd.runc.v2 Nov 1 00:44:15.731823 systemd[1]: Started cri-containerd-101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e.scope. Nov 1 00:44:15.756704 env[1205]: time="2025-11-01T00:44:15.756653303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmn6x,Uid:fa8f3d5e-efd5-4034-ad76-6b788d9b6b4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\"" Nov 1 00:44:15.757672 kubelet[1960]: E1101 00:44:15.757636 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:15.763812 env[1205]: time="2025-11-01T00:44:15.763764976Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:44:15.775513 env[1205]: time="2025-11-01T00:44:15.775470635Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73\"" Nov 1 00:44:15.776363 env[1205]: time="2025-11-01T00:44:15.776305643Z" level=info msg="StartContainer for \"7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73\"" Nov 1 00:44:15.790937 systemd[1]: Started cri-containerd-7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73.scope. Nov 1 00:44:15.820072 env[1205]: time="2025-11-01T00:44:15.820012102Z" level=info msg="StartContainer for \"7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73\" returns successfully" Nov 1 00:44:15.831702 systemd[1]: cri-containerd-7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73.scope: Deactivated successfully. 
Nov 1 00:44:15.862697 env[1205]: time="2025-11-01T00:44:15.862627183Z" level=info msg="shim disconnected" id=7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73 Nov 1 00:44:15.862697 env[1205]: time="2025-11-01T00:44:15.862687267Z" level=warning msg="cleaning up after shim disconnected" id=7e331bfa81d996cb782ed606ed4254834ae3c5164ea6ab60aaf27f079b098e73 namespace=k8s.io Nov 1 00:44:15.862697 env[1205]: time="2025-11-01T00:44:15.862699141Z" level=info msg="cleaning up dead shim" Nov 1 00:44:15.870931 env[1205]: time="2025-11-01T00:44:15.870845796Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n" Nov 1 00:44:16.361886 kubelet[1960]: E1101 00:44:16.361812 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:16.367691 env[1205]: time="2025-11-01T00:44:16.367616875Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:44:16.383201 env[1205]: time="2025-11-01T00:44:16.383123264Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752\"" Nov 1 00:44:16.384000 env[1205]: time="2025-11-01T00:44:16.383927825Z" level=info msg="StartContainer for \"87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752\"" Nov 1 00:44:16.403152 systemd[1]: Started cri-containerd-87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752.scope. Nov 1 00:44:16.450096 env[1205]: time="2025-11-01T00:44:16.450002471Z" level=info msg="StartContainer for \"87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752\" returns successfully" Nov 1 00:44:16.462225 systemd[1]: cri-containerd-87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752.scope: Deactivated successfully. 
Nov 1 00:44:16.500747 env[1205]: time="2025-11-01T00:44:16.500642515Z" level=info msg="shim disconnected" id=87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752 Nov 1 00:44:16.500747 env[1205]: time="2025-11-01T00:44:16.500723250Z" level=warning msg="cleaning up after shim disconnected" id=87e321e7eb1cc056eb4b0934943d1ef24a3b252bcf4db5ba03c2bb6a09ab4752 namespace=k8s.io Nov 1 00:44:16.501872 env[1205]: time="2025-11-01T00:44:16.500755131Z" level=info msg="cleaning up dead shim" Nov 1 00:44:16.509519 env[1205]: time="2025-11-01T00:44:16.509441562Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3956 runtime=io.containerd.runc.v2\n" Nov 1 00:44:16.574358 kubelet[1960]: I1101 00:44:16.574285 1960 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29903b5a-cd6c-494c-8c10-8033443a400c" path="/var/lib/kubelet/pods/29903b5a-cd6c-494c-8c10-8033443a400c/volumes" Nov 1 00:44:17.365523 kubelet[1960]: E1101 00:44:17.365462 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:17.375391 env[1205]: time="2025-11-01T00:44:17.371828458Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:44:17.385675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183885342.mount: Deactivated successfully. Nov 1 00:44:17.389400 env[1205]: time="2025-11-01T00:44:17.389317065Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8\"" Nov 1 00:44:17.390142 env[1205]: time="2025-11-01T00:44:17.390113942Z" level=info msg="StartContainer for \"97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8\"" Nov 1 00:44:17.410398 systemd[1]: Started cri-containerd-97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8.scope. Nov 1 00:44:17.454219 env[1205]: time="2025-11-01T00:44:17.454149609Z" level=info msg="StartContainer for \"97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8\" returns successfully" Nov 1 00:44:17.454668 systemd[1]: cri-containerd-97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8.scope: Deactivated successfully. Nov 1 00:44:17.485086 env[1205]: time="2025-11-01T00:44:17.485019009Z" level=info msg="shim disconnected" id=97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8 Nov 1 00:44:17.485086 env[1205]: time="2025-11-01T00:44:17.485068614Z" level=warning msg="cleaning up after shim disconnected" id=97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8 namespace=k8s.io Nov 1 00:44:17.485086 env[1205]: time="2025-11-01T00:44:17.485077210Z" level=info msg="cleaning up dead shim" Nov 1 00:44:17.494296 env[1205]: time="2025-11-01T00:44:17.494241919Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n" Nov 1 00:44:17.612203 systemd[1]: run-containerd-runc-k8s.io-97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8-runc.PUWvWf.mount: Deactivated successfully. 
Nov 1 00:44:17.612332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97d5a3c28b99adfdb037d6f58807766163d75522a0adc913cfcc329cab14e2f8-rootfs.mount: Deactivated successfully. Nov 1 00:44:18.371432 kubelet[1960]: E1101 00:44:18.371304 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:18.376715 env[1205]: time="2025-11-01T00:44:18.376658965Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:44:18.393778 env[1205]: time="2025-11-01T00:44:18.393707695Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee\"" Nov 1 00:44:18.394471 env[1205]: time="2025-11-01T00:44:18.394399209Z" level=info msg="StartContainer for \"b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee\"" Nov 1 00:44:18.412893 systemd[1]: Started cri-containerd-b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee.scope. Nov 1 00:44:18.447520 systemd[1]: cri-containerd-b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee.scope: Deactivated successfully. Nov 1 00:44:18.449320 env[1205]: time="2025-11-01T00:44:18.449259108Z" level=info msg="StartContainer for \"b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee\" returns successfully" Nov 1 00:44:18.472723 env[1205]: time="2025-11-01T00:44:18.472641558Z" level=info msg="shim disconnected" id=b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee Nov 1 00:44:18.472723 env[1205]: time="2025-11-01T00:44:18.472702985Z" level=warning msg="cleaning up after shim disconnected" id=b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee namespace=k8s.io Nov 1 00:44:18.472723 env[1205]: time="2025-11-01T00:44:18.472712002Z" level=info msg="cleaning up dead shim" Nov 1 00:44:18.480737 env[1205]: time="2025-11-01T00:44:18.480660375Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n" Nov 1 00:44:18.611798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ddc637b0c7a40e54f232561a31b28659db4faa1e3d3c18910a42aa142742ee-rootfs.mount: Deactivated successfully. Nov 1 00:44:18.618231 kubelet[1960]: E1101 00:44:18.618196 1960 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:44:19.388669 kubelet[1960]: E1101 00:44:19.386867 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:19.402503 env[1205]: time="2025-11-01T00:44:19.399223800Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:44:19.440227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586762548.mount: Deactivated successfully. 
Nov 1 00:44:19.458767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397111155.mount: Deactivated successfully. Nov 1 00:44:19.462981 env[1205]: time="2025-11-01T00:44:19.462854982Z" level=info msg="CreateContainer within sandbox \"101ff801932906ec8b51aabfe03faff687bc77afbc4d86381f7d7e9687f03b6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d\"" Nov 1 00:44:19.463883 env[1205]: time="2025-11-01T00:44:19.463796054Z" level=info msg="StartContainer for \"803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d\"" Nov 1 00:44:19.519152 systemd[1]: Started cri-containerd-803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d.scope. Nov 1 00:44:19.590562 env[1205]: time="2025-11-01T00:44:19.590485301Z" level=info msg="StartContainer for \"803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d\" returns successfully" Nov 1 00:44:20.404972 kubelet[1960]: E1101 00:44:20.404063 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:20.601926 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:44:20.702455 systemd[1]: run-containerd-runc-k8s.io-803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d-runc.zs4DYJ.mount: Deactivated successfully. Nov 1 00:44:21.089858 kubelet[1960]: I1101 00:44:21.089415 1960 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:44:21Z","lastTransitionTime":"2025-11-01T00:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:44:21.707049 kubelet[1960]: E1101 00:44:21.706986 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:22.914446 systemd[1]: run-containerd-runc-k8s.io-803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d-runc.tWFySA.mount: Deactivated successfully. Nov 1 00:44:23.811061 systemd-networkd[1022]: lxc_health: Link UP Nov 1 00:44:23.833384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:44:23.833477 systemd-networkd[1022]: lxc_health: Gained carrier Nov 1 00:44:25.068527 systemd[1]: run-containerd-runc-k8s.io-803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d-runc.eic58K.mount: Deactivated successfully. 
Nov 1 00:44:25.708073 kubelet[1960]: E1101 00:44:25.708024 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:25.724785 kubelet[1960]: I1101 00:44:25.724706 1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fmn6x" podStartSLOduration=10.724690772 podStartE2EDuration="10.724690772s" podCreationTimestamp="2025-11-01 00:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:20.456740028 +0000 UTC m=+101.977424759" watchObservedRunningTime="2025-11-01 00:44:25.724690772 +0000 UTC m=+107.245375503" Nov 1 00:44:25.817751 systemd-networkd[1022]: lxc_health: Gained IPv6LL Nov 1 00:44:26.414553 kubelet[1960]: E1101 00:44:26.414499 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:27.416565 kubelet[1960]: E1101 00:44:27.416149 1960 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:44:29.304291 systemd[1]: run-containerd-runc-k8s.io-803ed7c9c613ac22044bc8a0795f8f7bb5f1667dc725862ccd7ea29b67fb3f9d-runc.vHUIuh.mount: Deactivated successfully. Nov 1 00:44:29.364706 sshd[3777]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:29.367008 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:33114.service: Deactivated successfully. Nov 1 00:44:29.367734 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:44:29.368311 systemd-logind[1194]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:44:29.369084 systemd-logind[1194]: Removed session 26.