Nov 1 00:39:42.994358 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:39:42.994388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:42.994402 kernel: BIOS-provided physical RAM map:
Nov 1 00:39:42.994409 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:39:42.994415 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:39:42.994422 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:39:42.994430 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 1 00:39:42.994437 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 1 00:39:42.994447 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:39:42.994453 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:39:42.994460 kernel: NX (Execute Disable) protection: active
Nov 1 00:39:42.994466 kernel: SMBIOS 2.8 present.
Nov 1 00:39:42.994473 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 1 00:39:42.994480 kernel: Hypervisor detected: KVM
Nov 1 00:39:42.994488 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:39:42.994499 kernel: kvm-clock: cpu 0, msr 771a0001, primary cpu clock
Nov 1 00:39:42.994506 kernel: kvm-clock: using sched offset of 3732760018 cycles
Nov 1 00:39:42.994514 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:39:42.994525 kernel: tsc: Detected 2494.140 MHz processor
Nov 1 00:39:42.994532 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:39:42.994540 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:39:42.994548 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 1 00:39:42.994556 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:39:42.994566 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:39:42.994573 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 1 00:39:42.994581 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994588 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994595 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994602 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:39:42.994610 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994617 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994625 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994635 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:39:42.994642 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 1 00:39:42.994650 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 1 00:39:42.994657 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:39:42.994664 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 1 00:39:42.994672 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 1 00:39:42.994679 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 1 00:39:42.994686 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 1 00:39:42.994701 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:39:42.994709 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:39:42.994717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:39:42.994725 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 00:39:42.994733 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 1 00:39:42.994741 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 1 00:39:42.994753 kernel: Zone ranges:
Nov 1 00:39:42.994760 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:39:42.994769 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 1 00:39:42.994776 kernel: Normal empty
Nov 1 00:39:42.994784 kernel: Movable zone start for each node
Nov 1 00:39:42.994792 kernel: Early memory node ranges
Nov 1 00:39:42.994800 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:39:42.994808 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 1 00:39:42.994816 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 1 00:39:42.994827 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:39:42.994838 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:39:42.994846 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 1 00:39:42.994854 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:39:42.994862 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:39:42.994870 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:39:42.994878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:39:42.994886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:39:42.994894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:39:42.994905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:39:42.994916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:39:42.994924 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:39:42.994932 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:39:42.994940 kernel: TSC deadline timer available
Nov 1 00:39:42.994949 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:39:42.994957 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 1 00:39:42.994964 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:39:42.994972 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:39:42.995010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:39:42.995044 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:39:42.995056 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:39:42.995064 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:39:42.995072 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Nov 1 00:39:42.995080 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:39:42.995088 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 1 00:39:42.995096 kernel: Policy zone: DMA32
Nov 1 00:39:42.995106 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:42.995118 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:39:42.995126 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:39:42.995134 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:39:42.995142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:39:42.995150 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 123076K reserved, 0K cma-reserved)
Nov 1 00:39:42.995159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:39:42.995166 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:39:42.995175 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:39:42.995186 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:39:42.995194 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:39:42.995203 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:39:42.995212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:39:42.995220 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:39:42.995228 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:39:42.995236 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:39:42.995244 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:39:42.995252 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:39:42.995263 kernel: random: crng init done
Nov 1 00:39:42.995271 kernel: Console: colour VGA+ 80x25
Nov 1 00:39:42.995279 kernel: printk: console [tty0] enabled
Nov 1 00:39:42.995287 kernel: printk: console [ttyS0] enabled
Nov 1 00:39:42.995295 kernel: ACPI: Core revision 20210730
Nov 1 00:39:42.995303 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:39:42.995311 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:39:42.995319 kernel: x2apic enabled
Nov 1 00:39:42.995327 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:39:42.995335 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:39:42.995346 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 1 00:39:42.995355 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Nov 1 00:39:42.995375 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 00:39:42.995386 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 00:39:42.995397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:39:42.995409 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:39:42.995420 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:39:42.995431 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:39:42.995447 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:39:42.995470 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:39:42.995484 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:39:42.995497 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:39:42.995506 kernel: active return thunk: its_return_thunk
Nov 1 00:39:42.995515 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:39:42.995523 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:39:42.995545 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:39:42.995558 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:39:42.995567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:39:42.995579 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:39:42.995588 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:39:42.995597 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:39:42.995606 kernel: LSM: Security Framework initializing
Nov 1 00:39:42.995615 kernel: SELinux: Initializing.
Nov 1 00:39:42.995624 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:39:42.995633 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:39:42.995648 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 1 00:39:42.995662 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 1 00:39:42.995675 kernel: signal: max sigframe size: 1776
Nov 1 00:39:42.995686 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:39:42.995698 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:39:42.995710 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:39:42.995721 kernel: x86: Booting SMP configuration:
Nov 1 00:39:42.995733 kernel: .... node #0, CPUs: #1
Nov 1 00:39:42.995744 kernel: kvm-clock: cpu 1, msr 771a0041, secondary cpu clock
Nov 1 00:39:42.995761 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Nov 1 00:39:42.995773 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:39:42.995786 kernel: smpboot: Max logical packages: 1
Nov 1 00:39:42.995797 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Nov 1 00:39:42.995806 kernel: devtmpfs: initialized
Nov 1 00:39:42.995815 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:39:42.995824 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:39:42.995833 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:39:42.995842 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:39:42.995854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:39:42.995863 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:39:42.995871 kernel: audit: type=2000 audit(1761957581.829:1): state=initialized audit_enabled=0 res=1
Nov 1 00:39:42.995880 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:39:42.995889 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:39:42.995897 kernel: cpuidle: using governor menu
Nov 1 00:39:42.995910 kernel: ACPI: bus type PCI registered
Nov 1 00:39:42.995919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:39:42.995927 kernel: dca service started, version 1.12.1
Nov 1 00:39:42.995940 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:39:42.995949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:39:42.995957 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:39:42.995966 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:39:42.995974 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:39:42.996005 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:39:42.996014 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:39:42.996023 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:39:42.996032 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:39:42.996044 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:39:42.996053 kernel: ACPI: Interpreter enabled
Nov 1 00:39:42.996062 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:39:42.996070 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:39:42.996079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:39:42.996088 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:39:42.996097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:39:42.996356 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:39:42.996461 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 1 00:39:42.996474 kernel: acpiphp: Slot [3] registered
Nov 1 00:39:42.996482 kernel: acpiphp: Slot [4] registered
Nov 1 00:39:42.996492 kernel: acpiphp: Slot [5] registered
Nov 1 00:39:42.996500 kernel: acpiphp: Slot [6] registered
Nov 1 00:39:42.996509 kernel: acpiphp: Slot [7] registered
Nov 1 00:39:42.996517 kernel: acpiphp: Slot [8] registered
Nov 1 00:39:42.996526 kernel: acpiphp: Slot [9] registered
Nov 1 00:39:42.996535 kernel: acpiphp: Slot [10] registered
Nov 1 00:39:42.996547 kernel: acpiphp: Slot [11] registered
Nov 1 00:39:42.996555 kernel: acpiphp: Slot [12] registered
Nov 1 00:39:42.996564 kernel: acpiphp: Slot [13] registered
Nov 1 00:39:42.996573 kernel: acpiphp: Slot [14] registered
Nov 1 00:39:42.996582 kernel: acpiphp: Slot [15] registered
Nov 1 00:39:42.996590 kernel: acpiphp: Slot [16] registered
Nov 1 00:39:42.996599 kernel: acpiphp: Slot [17] registered
Nov 1 00:39:42.996607 kernel: acpiphp: Slot [18] registered
Nov 1 00:39:42.996616 kernel: acpiphp: Slot [19] registered
Nov 1 00:39:42.996627 kernel: acpiphp: Slot [20] registered
Nov 1 00:39:42.996636 kernel: acpiphp: Slot [21] registered
Nov 1 00:39:42.996645 kernel: acpiphp: Slot [22] registered
Nov 1 00:39:42.996653 kernel: acpiphp: Slot [23] registered
Nov 1 00:39:42.996662 kernel: acpiphp: Slot [24] registered
Nov 1 00:39:42.996671 kernel: acpiphp: Slot [25] registered
Nov 1 00:39:42.996679 kernel: acpiphp: Slot [26] registered
Nov 1 00:39:42.996688 kernel: acpiphp: Slot [27] registered
Nov 1 00:39:42.996697 kernel: acpiphp: Slot [28] registered
Nov 1 00:39:42.996706 kernel: acpiphp: Slot [29] registered
Nov 1 00:39:42.996718 kernel: acpiphp: Slot [30] registered
Nov 1 00:39:42.996726 kernel: acpiphp: Slot [31] registered
Nov 1 00:39:42.996735 kernel: PCI host bridge to bus 0000:00
Nov 1 00:39:42.996868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:39:42.996955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:39:42.997071 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:39:42.997150 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:39:42.997234 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 1 00:39:42.997329 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:39:42.997462 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:39:42.997567 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:39:42.997673 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 1 00:39:42.997766 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 1 00:39:42.997860 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 1 00:39:42.997951 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 1 00:39:42.998052 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 1 00:39:42.998142 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 1 00:39:42.998257 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 1 00:39:42.998363 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 1 00:39:42.998480 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:39:42.998576 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 1 00:39:42.998666 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 1 00:39:42.998778 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:39:42.998871 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 1 00:39:42.999002 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 1 00:39:43.003314 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 1 00:39:43.003477 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 1 00:39:43.003587 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:39:43.003704 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:39:43.003798 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 1 00:39:43.003893 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 1 00:39:43.004007 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 1 00:39:43.004152 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:39:43.004259 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 1 00:39:43.004356 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 1 00:39:43.004447 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 1 00:39:43.004566 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:39:43.004666 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 1 00:39:43.004762 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 1 00:39:43.004853 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 1 00:39:43.004960 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:39:43.005161 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:39:43.005253 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 1 00:39:43.005341 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 1 00:39:43.005446 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:39:43.005535 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 1 00:39:43.005624 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 1 00:39:43.005719 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 1 00:39:43.005842 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 1 00:39:43.005978 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 1 00:39:43.006130 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 1 00:39:43.006145 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:39:43.006154 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:39:43.006164 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:39:43.006180 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:39:43.006190 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:39:43.006199 kernel: iommu: Default domain type: Translated
Nov 1 00:39:43.006208 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:39:43.006305 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 1 00:39:43.007204 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:39:43.007317 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 1 00:39:43.007329 kernel: vgaarb: loaded
Nov 1 00:39:43.007339 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:39:43.007357 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:39:43.007366 kernel: PTP clock support registered
Nov 1 00:39:43.007375 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:39:43.007384 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:39:43.007394 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:39:43.007403 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 1 00:39:43.007412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:39:43.007421 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:39:43.007430 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:39:43.007443 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:39:43.007452 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:39:43.007462 kernel: pnp: PnP ACPI init
Nov 1 00:39:43.007471 kernel: pnp: PnP ACPI: found 4 devices
Nov 1 00:39:43.007480 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:39:43.007489 kernel: NET: Registered PF_INET protocol family
Nov 1 00:39:43.007498 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:39:43.007507 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:39:43.007519 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:39:43.007529 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:39:43.007541 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 00:39:43.007553 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:39:43.007563 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:39:43.007572 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:39:43.007582 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:39:43.007591 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:39:43.007694 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:39:43.007791 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:39:43.007881 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:39:43.007967 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:39:43.008066 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 1 00:39:43.008169 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 1 00:39:43.008282 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:39:43.008393 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 1 00:39:43.008406 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 1 00:39:43.008512 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 34482 usecs
Nov 1 00:39:43.008524 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:39:43.008533 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:39:43.008542 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 1 00:39:43.008552 kernel: Initialise system trusted keyrings
Nov 1 00:39:43.008561 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:39:43.008571 kernel: Key type asymmetric registered
Nov 1 00:39:43.008579 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:39:43.008589 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:39:43.008602 kernel: io scheduler mq-deadline registered
Nov 1 00:39:43.008613 kernel: io scheduler kyber registered
Nov 1 00:39:43.008629 kernel: io scheduler bfq registered
Nov 1 00:39:43.008640 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:39:43.008653 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 1 00:39:43.008665 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:39:43.008681 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:39:43.008692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:39:43.008705 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:39:43.008721 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:39:43.008730 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:39:43.008739 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:39:43.008748 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:39:43.008900 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:39:43.009003 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:39:43.009132 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:39:42 UTC (1761957582)
Nov 1 00:39:43.009228 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 1 00:39:43.009242 kernel: intel_pstate: CPU model not supported
Nov 1 00:39:43.009257 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:39:43.009269 kernel: Segment Routing with IPv6
Nov 1 00:39:43.009282 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:39:43.009294 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:39:43.009306 kernel: Key type dns_resolver registered
Nov 1 00:39:43.009319 kernel: IPI shorthand broadcast: enabled
Nov 1 00:39:43.009332 kernel: sched_clock: Marking stable (765816306, 142929269)->(1041976535, -133230960)
Nov 1 00:39:43.009343 kernel: registered taskstats version 1
Nov 1 00:39:43.009357 kernel: Loading compiled-in X.509 certificates
Nov 1 00:39:43.009366 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:39:43.009375 kernel: Key type .fscrypt registered
Nov 1 00:39:43.009384 kernel: Key type fscrypt-provisioning registered
Nov 1 00:39:43.009394 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:39:43.009403 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:39:43.009412 kernel: ima: No architecture policies found
Nov 1 00:39:43.009420 kernel: clk: Disabling unused clocks
Nov 1 00:39:43.009433 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:39:43.009441 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:39:43.009452 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:39:43.009465 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:39:43.009477 kernel: Run /init as init process
Nov 1 00:39:43.009487 kernel: with arguments:
Nov 1 00:39:43.009521 kernel: /init
Nov 1 00:39:43.009534 kernel: with environment:
Nov 1 00:39:43.009543 kernel: HOME=/
Nov 1 00:39:43.009555 kernel: TERM=linux
Nov 1 00:39:43.009564 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:39:43.009578 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:39:43.009591 systemd[1]: Detected virtualization kvm.
Nov 1 00:39:43.009601 systemd[1]: Detected architecture x86-64.
Nov 1 00:39:43.009611 systemd[1]: Running in initrd.
Nov 1 00:39:43.009621 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:39:43.009630 systemd[1]: Hostname set to .
Nov 1 00:39:43.009643 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:39:43.009653 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:39:43.009662 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:39:43.009672 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:39:43.009682 systemd[1]: Reached target paths.target.
Nov 1 00:39:43.009691 systemd[1]: Reached target slices.target.
Nov 1 00:39:43.009701 systemd[1]: Reached target swap.target.
Nov 1 00:39:43.009711 systemd[1]: Reached target timers.target.
Nov 1 00:39:43.009725 systemd[1]: Listening on iscsid.socket.
Nov 1 00:39:43.009735 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:39:43.009747 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:39:43.009758 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:39:43.009767 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:39:43.009777 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:39:43.009787 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:39:43.009796 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:39:43.009810 systemd[1]: Reached target sockets.target.
Nov 1 00:39:43.009820 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:39:43.009833 systemd[1]: Finished network-cleanup.service.
Nov 1 00:39:43.009843 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:39:43.009852 systemd[1]: Starting systemd-journald.service...
Nov 1 00:39:43.009865 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:39:43.009874 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:39:43.009884 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:39:43.009894 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:39:43.009903 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:39:43.009922 systemd-journald[184]: Journal started
Nov 1 00:39:43.010009 systemd-journald[184]: Runtime Journal (/run/log/journal/de285237522e4f00a8adfaf76d000d2d) is 4.9M, max 39.5M, 34.5M free.
Nov 1 00:39:43.003464 systemd-modules-load[185]: Inserted module 'overlay'
Nov 1 00:39:43.064585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:39:43.064618 kernel: Bridge firewalling registered
Nov 1 00:39:43.017349 systemd-resolved[186]: Positive Trust Anchors:
Nov 1 00:39:43.071428 systemd[1]: Started systemd-journald.service.
Nov 1 00:39:43.071464 kernel: audit: type=1130 audit(1761957583.064:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.017359 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:39:43.017400 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:39:43.090163 kernel: audit: type=1130 audit(1761957583.071:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.090199 kernel: SCSI subsystem initialized
Nov 1 00:39:43.090211 kernel: audit: type=1130 audit(1761957583.076:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.090224 kernel: audit: type=1130 audit(1761957583.078:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.020428 systemd-resolved[186]: Defaulting to hostname 'linux'.
Nov 1 00:39:43.051845 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 1 00:39:43.072404 systemd[1]: Started systemd-resolved.service.
Nov 1 00:39:43.078102 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:39:43.079544 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:39:43.081007 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:39:43.096187 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:39:43.108931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:39:43.114371 kernel: audit: type=1130 audit(1761957583.109:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.114458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:39:43.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.120182 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:39:43.122645 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:39:43.122693 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:39:43.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.127058 kernel: audit: type=1130 audit(1761957583.122:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.128394 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:39:43.133145 systemd-modules-load[185]: Inserted module 'dm_multipath'
Nov 1 00:39:43.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.134593 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:39:43.147923 kernel: audit: type=1130 audit(1761957583.134:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.146323 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:39:43.152596 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:39:43.159190 kernel: audit: type=1130 audit(1761957583.152:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.159327 dracut-cmdline[202]: dracut-dracut-053
Nov 1 00:39:43.163471 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:39:43.253064 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:39:43.273022 kernel: iscsi: registered transport (tcp)
Nov 1 00:39:43.300237 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:39:43.300329 kernel: QLogic iSCSI HBA Driver
Nov 1 00:39:43.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.346694 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:39:43.354118 kernel: audit: type=1130 audit(1761957583.347:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.354518 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:39:43.408087 kernel: raid6: avx2x4 gen() 21404 MB/s
Nov 1 00:39:43.425089 kernel: raid6: avx2x4 xor() 5416 MB/s
Nov 1 00:39:43.442057 kernel: raid6: avx2x2 gen() 21493 MB/s
Nov 1 00:39:43.459102 kernel: raid6: avx2x2 xor() 20255 MB/s
Nov 1 00:39:43.476051 kernel: raid6: avx2x1 gen() 20163 MB/s
Nov 1 00:39:43.494058 kernel: raid6: avx2x1 xor() 16121 MB/s
Nov 1 00:39:43.512062 kernel: raid6: sse2x4 gen() 7558 MB/s
Nov 1 00:39:43.530060 kernel: raid6: sse2x4 xor() 3927 MB/s
Nov 1 00:39:43.548054 kernel: raid6: sse2x2 gen() 7533 MB/s
Nov 1 00:39:43.566141 kernel: raid6: sse2x2 xor() 5431 MB/s
Nov 1 00:39:43.584052 kernel: raid6: sse2x1 gen() 6486 MB/s
Nov 1 00:39:43.602550 kernel: raid6: sse2x1 xor() 4593 MB/s
Nov 1 00:39:43.602648 kernel: raid6: using algorithm avx2x2 gen() 21493 MB/s
Nov 1 00:39:43.602662 kernel: raid6: .... xor() 20255 MB/s, rmw enabled
Nov 1 00:39:43.603813 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:39:43.625027 kernel: xor: automatically using best checksumming function avx
Nov 1 00:39:43.757077 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 00:39:43.773136 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:39:43.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.773000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:39:43.773000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:39:43.776358 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:39:43.791129 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Nov 1 00:39:43.797069 systemd[1]: Started systemd-udevd.service.
Nov 1 00:39:43.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.802921 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:39:43.822634 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Nov 1 00:39:43.868276 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:39:43.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.869812 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:39:43.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:43.920761 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:39:43.992504 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 1 00:39:44.046588 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:39:44.046612 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:39:44.046755 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:39:44.046768 kernel: GPT:9289727 != 125829119
Nov 1 00:39:44.046780 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:39:44.046791 kernel: GPT:9289727 != 125829119
Nov 1 00:39:44.046802 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:39:44.046817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:39:44.050016 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 1 00:39:44.059054 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:39:44.059150 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:39:44.061007 kernel: libata version 3.00 loaded.
Nov 1 00:39:44.070345 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 1 00:39:44.092102 kernel: scsi host1: ata_piix
Nov 1 00:39:44.092303 kernel: scsi host2: ata_piix
Nov 1 00:39:44.092463 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 1 00:39:44.092477 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 1 00:39:44.109042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:39:44.166786 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (443)
Nov 1 00:39:44.166820 kernel: ACPI: bus type USB registered
Nov 1 00:39:44.166834 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:39:44.166845 kernel: usbcore: registered new interface driver hub
Nov 1 00:39:44.166868 kernel: usbcore: registered new device driver usb
Nov 1 00:39:44.170902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:39:44.178195 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:39:44.179677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:39:44.183568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:39:44.185638 systemd[1]: Starting disk-uuid.service...
Nov 1 00:39:44.192527 disk-uuid[503]: Primary Header is updated.
Nov 1 00:39:44.192527 disk-uuid[503]: Secondary Entries is updated.
Nov 1 00:39:44.192527 disk-uuid[503]: Secondary Header is updated.
Nov 1 00:39:44.200036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:39:44.205018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:39:44.285009 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 1 00:39:44.289016 kernel: ehci-pci: EHCI PCI platform driver
Nov 1 00:39:44.303025 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 1 00:39:44.350969 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 1 00:39:44.362894 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 1 00:39:44.363141 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 1 00:39:44.363250 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Nov 1 00:39:44.363348 kernel: hub 1-0:1.0: USB hub found
Nov 1 00:39:44.363486 kernel: hub 1-0:1.0: 2 ports detected
Nov 1 00:39:45.215398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:39:45.215512 disk-uuid[504]: The operation has completed successfully.
Nov 1 00:39:45.261911 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:39:45.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.262070 systemd[1]: Finished disk-uuid.service.
Nov 1 00:39:45.264001 systemd[1]: Starting verity-setup.service...
Nov 1 00:39:45.286036 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:39:45.353316 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:39:45.354878 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:39:45.359250 systemd[1]: Finished verity-setup.service.
Nov 1 00:39:45.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.450034 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:39:45.450287 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:39:45.450930 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:39:45.451979 systemd[1]: Starting ignition-setup.service...
Nov 1 00:39:45.453769 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:39:45.475216 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:39:45.475329 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:39:45.475363 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:39:45.502136 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:39:45.513817 systemd[1]: Finished ignition-setup.service.
Nov 1 00:39:45.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.516104 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:39:45.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.623000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:39:45.623398 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:39:45.625909 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:39:45.661375 systemd-networkd[689]: lo: Link UP
Nov 1 00:39:45.661389 systemd-networkd[689]: lo: Gained carrier
Nov 1 00:39:45.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.664340 systemd-networkd[689]: Enumeration completed
Nov 1 00:39:45.664516 systemd[1]: Started systemd-networkd.service.
Nov 1 00:39:45.665088 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:39:45.665115 systemd[1]: Reached target network.target.
Nov 1 00:39:45.666907 systemd[1]: Starting iscsiuio.service...
Nov 1 00:39:45.673545 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 1 00:39:45.675064 systemd-networkd[689]: eth1: Link UP
Nov 1 00:39:45.675071 systemd-networkd[689]: eth1: Gained carrier
Nov 1 00:39:45.678618 systemd-networkd[689]: eth0: Link UP
Nov 1 00:39:45.678625 systemd-networkd[689]: eth0: Gained carrier
Nov 1 00:39:45.685457 ignition[624]: Ignition 2.14.0
Nov 1 00:39:45.685472 ignition[624]: Stage: fetch-offline
Nov 1 00:39:45.685557 ignition[624]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:45.685588 ignition[624]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:45.690785 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:45.691065 ignition[624]: parsed url from cmdline: ""
Nov 1 00:39:45.694390 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:39:45.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.691071 ignition[624]: no config URL provided
Nov 1 00:39:45.695075 systemd[1]: Started iscsiuio.service.
Nov 1 00:39:45.691079 ignition[624]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:39:45.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.696766 systemd-networkd[689]: eth0: DHCPv4 address 143.198.72.73/20, gateway 143.198.64.1 acquired from 169.254.169.253
Nov 1 00:39:45.691095 ignition[624]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:39:45.697650 systemd[1]: Starting ignition-fetch.service...
Nov 1 00:39:45.691104 ignition[624]: failed to fetch config: resource requires networking
Nov 1 00:39:45.706291 systemd[1]: Starting iscsid.service...
Nov 1 00:39:45.720942 iscsid[701]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:39:45.720942 iscsid[701]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Nov 1 00:39:45.720942 iscsid[701]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 00:39:45.720942 iscsid[701]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:39:45.720942 iscsid[701]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:39:45.720942 iscsid[701]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:39:45.720942 iscsid[701]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:39:45.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.691260 ignition[624]: Ignition finished successfully
Nov 1 00:39:45.707212 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
Nov 1 00:39:45.710297 ignition[695]: Ignition 2.14.0
Nov 1 00:39:45.720042 systemd[1]: Started iscsid.service.
Nov 1 00:39:45.710305 ignition[695]: Stage: fetch
Nov 1 00:39:45.722651 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:39:45.710465 ignition[695]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:45.710493 ignition[695]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:45.714327 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:45.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.714492 ignition[695]: parsed url from cmdline: ""
Nov 1 00:39:45.737923 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:39:45.714498 ignition[695]: no config URL provided
Nov 1 00:39:45.738742 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:39:45.714506 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:39:45.739324 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:39:45.714539 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:39:45.739860 systemd[1]: Reached target remote-fs.target.
Nov 1 00:39:45.714586 ignition[695]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 1 00:39:45.742253 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:39:45.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.754441 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:39:45.758709 ignition[695]: GET result: OK
Nov 1 00:39:45.758819 ignition[695]: parsing config with SHA512: 8ae5fffefa9f0f8ca6b8e1c34ebb8c0d863d0107f77ae822768f19a34a42b5010c3386f690f83747c2725696a1e51adbf427c16b1edbff95422ae77428e3b7f8
Nov 1 00:39:45.769343 unknown[695]: fetched base config from "system"
Nov 1 00:39:45.770160 unknown[695]: fetched base config from "system"
Nov 1 00:39:45.770859 unknown[695]: fetched user config from "digitalocean"
Nov 1 00:39:45.772010 ignition[695]: fetch: fetch complete
Nov 1 00:39:45.772562 ignition[695]: fetch: fetch passed
Nov 1 00:39:45.773363 ignition[695]: Ignition finished successfully
Nov 1 00:39:45.775962 systemd[1]: Finished ignition-fetch.service.
Nov 1 00:39:45.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.777905 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:39:45.789787 ignition[715]: Ignition 2.14.0
Nov 1 00:39:45.790685 ignition[715]: Stage: kargs
Nov 1 00:39:45.791372 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:45.792039 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:45.793909 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:45.796018 ignition[715]: kargs: kargs passed
Nov 1 00:39:45.796685 ignition[715]: Ignition finished successfully
Nov 1 00:39:45.798603 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:39:45.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.800516 systemd[1]: Starting ignition-disks.service...
Nov 1 00:39:45.817016 ignition[721]: Ignition 2.14.0
Nov 1 00:39:45.817035 ignition[721]: Stage: disks
Nov 1 00:39:45.817247 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:45.817278 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:45.820016 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:45.821866 ignition[721]: disks: disks passed
Nov 1 00:39:45.823203 systemd[1]: Finished ignition-disks.service.
Nov 1 00:39:45.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.821942 ignition[721]: Ignition finished successfully
Nov 1 00:39:45.824449 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:39:45.825154 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:39:45.826002 systemd[1]: Reached target local-fs.target.
Nov 1 00:39:45.826911 systemd[1]: Reached target sysinit.target.
Nov 1 00:39:45.827864 systemd[1]: Reached target basic.target.
Nov 1 00:39:45.830042 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:39:45.851679 systemd-fsck[729]: ROOT: clean, 637/553520 files, 56032/553472 blocks
Nov 1 00:39:45.855767 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:39:45.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:45.858476 systemd[1]: Mounting sysroot.mount...
Nov 1 00:39:45.881046 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:39:45.881622 systemd[1]: Mounted sysroot.mount.
Nov 1 00:39:45.883110 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:39:45.886245 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:39:45.889103 systemd[1]: Starting flatcar-digitalocean-network.service...
Nov 1 00:39:45.892263 systemd[1]: Starting flatcar-metadata-hostname.service...
Nov 1 00:39:45.894121 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:39:45.895422 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:39:45.898670 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:39:45.902609 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:39:45.912396 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:39:45.935514 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:39:45.942515 initrd-setup-root[759]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:39:45.957405 initrd-setup-root[769]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:39:46.030929 coreos-metadata[736]: Nov 01 00:39:46.030 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:39:46.044044 coreos-metadata[735]: Nov 01 00:39:46.043 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:39:46.045414 coreos-metadata[736]: Nov 01 00:39:46.045 INFO Fetch successful
Nov 1 00:39:46.052852 coreos-metadata[736]: Nov 01 00:39:46.052 INFO wrote hostname ci-3510.3.8-n-39b63463e5 to /sysroot/etc/hostname
Nov 1 00:39:46.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.054237 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:39:46.056175 systemd[1]: Starting ignition-mount.service...
Nov 1 00:39:46.062501 coreos-metadata[735]: Nov 01 00:39:46.058 INFO Fetch successful
Nov 1 00:39:46.064570 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:39:46.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.065777 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 1 00:39:46.065910 systemd[1]: Finished flatcar-digitalocean-network.service.
Nov 1 00:39:46.066893 systemd[1]: Finished flatcar-metadata-hostname.service.
Nov 1 00:39:46.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.074378 bash[786]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:39:46.093747 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:39:46.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.095973 ignition[788]: INFO : Ignition 2.14.0
Nov 1 00:39:46.095973 ignition[788]: INFO : Stage: mount
Nov 1 00:39:46.097190 ignition[788]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:46.097190 ignition[788]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:46.098694 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:46.101102 ignition[788]: INFO : mount: mount passed
Nov 1 00:39:46.101102 ignition[788]: INFO : Ignition finished successfully
Nov 1 00:39:46.102318 systemd[1]: Finished ignition-mount.service.
Nov 1 00:39:46.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:46.381223 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:39:46.393028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Nov 1 00:39:46.396746 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:39:46.396852 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:39:46.396866 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:39:46.409176 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:39:46.411058 systemd[1]: Starting ignition-files.service...
Nov 1 00:39:46.436957 ignition[816]: INFO : Ignition 2.14.0
Nov 1 00:39:46.436957 ignition[816]: INFO : Stage: files
Nov 1 00:39:46.438415 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:46.438415 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:46.439962 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:46.446861 ignition[816]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:39:46.447899 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:39:46.447899 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:39:46.450863 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:39:46.451836 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:39:46.453576 unknown[816]: wrote ssh authorized keys file for user: core
Nov 1 00:39:46.454907 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:39:46.456194 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:39:46.457259 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:39:46.491871 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:39:46.558063 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:39:46.559191 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:39:46.559191 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 1 00:39:46.758385 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:39:46.856528 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:39:46.856528 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:39:46.858465 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 00:39:47.070269 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 1 00:39:47.129295 systemd-networkd[689]: eth0: Gained IPv6LL
Nov 1 00:39:47.321173 systemd-networkd[689]: eth1: Gained IPv6LL
Nov 1 00:39:47.479791 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:39:47.479791 ignition[816]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:39:47.479791 ignition[816]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:39:47.479791 ignition[816]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:39:47.483996 ignition[816]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:39:47.497562 kernel: kauditd_printk_skb: 28 callbacks suppressed
Nov 1 00:39:47.497595 kernel: audit: type=1130 audit(1761957587.489:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.488784 systemd[1]: Finished ignition-files.service.
Nov 1 00:39:47.499546 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:39:47.499546 ignition[816]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:39:47.499546 ignition[816]: INFO : files: files passed
Nov 1 00:39:47.499546 ignition[816]: INFO : Ignition finished successfully
Nov 1 00:39:47.492719 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:39:47.498823 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:39:47.522265 kernel: audit: type=1130 audit(1761957587.507:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.522320 kernel: audit: type=1131 audit(1761957587.509:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.522343 kernel: audit: type=1130 audit(1761957587.516:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.522534 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:39:47.500781 systemd[1]: Starting ignition-quench.service...
Nov 1 00:39:47.507371 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:39:47.507502 systemd[1]: Finished ignition-quench.service.
Nov 1 00:39:47.510629 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:39:47.517872 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:39:47.524428 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:39:47.552697 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:39:47.552875 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:39:47.561477 kernel: audit: type=1130 audit(1761957587.553:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.561527 kernel: audit: type=1131 audit(1761957587.553:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.554248 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:39:47.561820 systemd[1]: Reached target initrd.target.
Nov 1 00:39:47.562824 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:39:47.564377 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:39:47.580320 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:39:47.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.584254 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:39:47.588978 kernel: audit: type=1130 audit(1761957587.580:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.596071 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:39:47.597341 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:39:47.598455 systemd[1]: Stopped target timers.target.
Nov 1 00:39:47.599584 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:39:47.600328 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:39:47.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.611057 kernel: audit: type=1131 audit(1761957587.600:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.611387 systemd[1]: Stopped target initrd.target.
Nov 1 00:39:47.612035 systemd[1]: Stopped target basic.target.
Nov 1 00:39:47.612804 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:39:47.613659 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:39:47.614599 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:39:47.615569 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:39:47.616365 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:39:47.617323 systemd[1]: Stopped target sysinit.target.
Nov 1 00:39:47.618137 systemd[1]: Stopped target local-fs.target.
Nov 1 00:39:47.619150 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:39:47.620265 systemd[1]: Stopped target swap.target.
Nov 1 00:39:47.621227 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:39:47.626288 kernel: audit: type=1131 audit(1761957587.621:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.621374 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:39:47.622327 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:39:47.626838 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:39:47.632276 kernel: audit: type=1131 audit(1761957587.628:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.627290 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:39:47.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.628639 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:39:47.628806 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:39:47.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.633045 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:39:47.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.633243 systemd[1]: Stopped ignition-files.service.
Nov 1 00:39:47.634546 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:39:47.634747 systemd[1]: Stopped flatcar-metadata-hostname.service.
Nov 1 00:39:47.637058 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:39:47.638230 iscsid[701]: iscsid shutting down.
Nov 1 00:39:47.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.643451 systemd[1]: Stopping iscsid.service...
Nov 1 00:39:47.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.645391 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:39:47.645926 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:39:47.646187 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:39:47.646851 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:39:47.647019 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:39:47.653965 ignition[854]: INFO : Ignition 2.14.0
Nov 1 00:39:47.653965 ignition[854]: INFO : Stage: umount
Nov 1 00:39:47.653965 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:39:47.653965 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:39:47.650138 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:39:47.650293 systemd[1]: Stopped iscsid.service.
Nov 1 00:39:47.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.661569 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:39:47.661688 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:39:47.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.664004 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:39:47.664505 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:39:47.668923 ignition[854]: INFO : umount: umount passed
Nov 1 00:39:47.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.676711 ignition[854]: INFO : Ignition finished successfully
Nov 1 00:39:47.669313 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:39:47.669440 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:39:47.671253 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:39:47.671409 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:39:47.672204 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:39:47.672283 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:39:47.672908 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:39:47.672967 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:39:47.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.673567 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:39:47.673633 systemd[1]: Stopped ignition-fetch.service.
Nov 1 00:39:47.674200 systemd[1]: Stopped target network.target.
Nov 1 00:39:47.674856 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:39:47.674929 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:39:47.675646 systemd[1]: Stopped target paths.target.
Nov 1 00:39:47.676209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:39:47.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.680232 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:39:47.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.680852 systemd[1]: Stopped target slices.target.
Nov 1 00:39:47.681993 systemd[1]: Stopped target sockets.target.
Nov 1 00:39:47.683143 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:39:47.683198 systemd[1]: Closed iscsid.socket.
Nov 1 00:39:47.684195 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:39:47.684238 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:39:47.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.685218 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:39:47.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.685303 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:39:47.687379 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:39:47.699000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:39:47.687884 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:39:47.689059 systemd-networkd[689]: eth0: DHCPv6 lease lost
Nov 1 00:39:47.689953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:39:47.690551 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:39:47.690662 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:39:47.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.692134 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:39:47.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.692188 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:39:47.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.693126 systemd-networkd[689]: eth1: DHCPv6 lease lost
Nov 1 00:39:47.712000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:39:47.694534 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:39:47.694636 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:39:47.697259 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:39:47.697357 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:39:47.698720 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:39:47.698763 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:39:47.700823 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:39:47.701626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:39:47.701755 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:39:47.705150 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:39:47.705216 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:39:47.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.706155 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:39:47.706201 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:39:47.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.709532 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:39:47.714834 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:39:47.718921 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:39:47.719097 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:39:47.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.722356 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:39:47.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.722520 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:39:47.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.724183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:39:47.724228 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:39:47.725107 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:39:47.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.725146 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:39:47.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:39:47.726061 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:39:47.726118 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:39:47.727057 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:39:47.727106 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:39:47.727849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:39:47.727903 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:39:47.729851 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:39:47.730413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:39:47.730476 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Nov 1 00:39:47.731673 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:39:47.731756 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:39:47.738673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:39:47.738743 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:39:47.740504 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 1 00:39:47.741035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:39:47.741135 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:39:47.741841 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:39:47.743683 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:39:47.761541 systemd[1]: Switching root.
Nov 1 00:39:47.781731 systemd-journald[184]: Journal stopped
Nov 1 00:39:51.378581 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:39:51.378709 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:39:51.378742 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:39:51.378773 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:39:51.378795 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:39:51.378823 kernel: SELinux: policy capability open_perms=1
Nov 1 00:39:51.378846 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:39:51.378867 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:39:51.378889 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:39:51.378919 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:39:51.378940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:39:51.378956 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:39:51.379009 systemd[1]: Successfully loaded SELinux policy in 52.271ms.
Nov 1 00:39:51.379044 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.043ms.
Nov 1 00:39:51.381102 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:39:51.381123 systemd[1]: Detected virtualization kvm.
Nov 1 00:39:51.381139 systemd[1]: Detected architecture x86-64.
Nov 1 00:39:51.381153 systemd[1]: Detected first boot.
Nov 1 00:39:51.381167 systemd[1]: Hostname set to .
Nov 1 00:39:51.381182 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:39:51.381206 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:39:51.381222 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:39:51.381245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:39:51.381272 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:39:51.381289 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:39:51.381305 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:39:51.381321 systemd[1]: Stopped initrd-switch-root.service.
Nov 1 00:39:51.381339 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:39:51.381354 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:39:51.381371 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:39:51.381393 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Nov 1 00:39:51.381415 systemd[1]: Created slice system-getty.slice.
Nov 1 00:39:51.381443 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:39:51.381470 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:39:51.381492 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:39:51.381511 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:39:51.381542 systemd[1]: Created slice user.slice.
Nov 1 00:39:51.381558 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:39:51.381572 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:39:51.381588 systemd[1]: Set up automount boot.automount.
Nov 1 00:39:51.381604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:39:51.381619 systemd[1]: Stopped target initrd-switch-root.target.
Nov 1 00:39:51.381639 systemd[1]: Stopped target initrd-fs.target.
Nov 1 00:39:51.381653 systemd[1]: Stopped target initrd-root-fs.target.
Nov 1 00:39:51.381669 systemd[1]: Reached target integritysetup.target.
Nov 1 00:39:51.381683 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:39:51.381698 systemd[1]: Reached target remote-fs.target. Nov 1 00:39:51.381712 systemd[1]: Reached target slices.target. Nov 1 00:39:51.381727 systemd[1]: Reached target swap.target. Nov 1 00:39:51.381741 systemd[1]: Reached target torcx.target. Nov 1 00:39:51.381756 systemd[1]: Reached target veritysetup.target. Nov 1 00:39:51.381774 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:39:51.381790 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:39:51.381811 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:39:51.381826 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:39:51.381840 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:39:51.381856 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:39:51.381872 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:39:51.381888 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:39:51.381903 systemd[1]: Mounting media.mount... Nov 1 00:39:51.381917 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:51.381935 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:39:51.381950 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:39:51.381965 systemd[1]: Mounting tmp.mount... Nov 1 00:39:51.382113 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:39:51.382137 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:51.382152 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:39:51.382168 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:39:51.382183 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:51.382197 systemd[1]: Starting modprobe@drm.service... Nov 1 00:39:51.382216 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:51.382231 systemd[1]: Starting modprobe@fuse.service... 
Nov 1 00:39:51.382246 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:51.382263 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:39:51.382278 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:39:51.382293 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:39:51.382310 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:39:51.382324 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:39:51.382340 systemd[1]: Stopped systemd-journald.service. Nov 1 00:39:51.382357 systemd[1]: Starting systemd-journald.service... Nov 1 00:39:51.382372 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:39:51.382386 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:39:51.382402 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:39:51.382417 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:39:51.382432 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:39:51.382446 kernel: loop: module loaded Nov 1 00:39:51.382479 systemd[1]: Stopped verity-setup.service. Nov 1 00:39:51.382494 kernel: fuse: init (API version 7.34) Nov 1 00:39:51.382512 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:51.382527 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:39:51.382541 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:39:51.382555 systemd[1]: Mounted media.mount. Nov 1 00:39:51.382569 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:39:51.382583 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:39:51.382597 systemd[1]: Mounted tmp.mount. Nov 1 00:39:51.382611 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:39:51.382625 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:39:51.382643 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Nov 1 00:39:51.382658 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:39:51.382672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:51.382686 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:51.382700 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:39:51.382718 systemd[1]: Finished modprobe@drm.service. Nov 1 00:39:51.382733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:51.382748 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:51.382762 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:39:51.382777 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:39:51.382797 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:51.382818 systemd-journald[963]: Journal started Nov 1 00:39:51.382893 systemd-journald[963]: Runtime Journal (/run/log/journal/de285237522e4f00a8adfaf76d000d2d) is 4.9M, max 39.5M, 34.5M free. Nov 1 00:39:47.921000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:39:47.983000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:39:47.983000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:39:47.983000 audit: BPF prog-id=10 op=LOAD Nov 1 00:39:47.983000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:39:47.983000 audit: BPF prog-id=11 op=LOAD Nov 1 00:39:47.983000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:39:48.095000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 
00:39:48.095000 audit[886]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:48.095000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:39:48.097000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:39:48.097000 audit[886]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:48.097000 audit: CWD cwd="/" Nov 1 00:39:48.097000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:48.097000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:48.097000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:39:51.384669 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:51.107000 audit: BPF prog-id=12 op=LOAD Nov 1 00:39:51.107000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:39:51.107000 audit: BPF prog-id=13 op=LOAD Nov 1 00:39:51.107000 audit: BPF prog-id=14 op=LOAD Nov 1 00:39:51.107000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:39:51.107000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:39:51.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.117000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:39:51.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:51.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.275000 audit: BPF prog-id=15 op=LOAD Nov 1 00:39:51.275000 audit: BPF prog-id=16 op=LOAD Nov 1 00:39:51.275000 audit: BPF prog-id=17 op=LOAD Nov 1 00:39:51.275000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:39:51.275000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:39:51.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:51.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.376000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:39:51.376000 audit[963]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffea7359c0 a2=4000 a3=7fffea735a5c items=0 ppid=1 pid=963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:51.376000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:39:51.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:39:51.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.104545 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:39:48.092492 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:39:51.387899 systemd[1]: Started systemd-journald.service. Nov 1 00:39:51.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.104566 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Nov 1 00:39:48.093068 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:39:51.109485 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:39:48.093091 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:39:48.093132 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:39:48.093142 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:39:48.093184 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:39:48.093198 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:39:48.093486 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:39:48.093552 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:39:48.093570 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:39:48.095143 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="new archive/reference 
added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:39:48.095185 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:39:48.095207 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:39:48.095222 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:39:51.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:48.095244 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:39:51.390146 systemd[1]: Finished systemd-modules-load.service. 
Nov 1 00:39:48.095258 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:39:50.568639 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:39:50.569166 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:39:50.569363 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:39:50.569643 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:39:50.569722 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:39:50.569812 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-11-01T00:39:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" 
TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:39:51.392724 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:39:51.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.394319 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:39:51.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.396054 systemd[1]: Reached target network-pre.target. Nov 1 00:39:51.398791 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:39:51.401727 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:39:51.403601 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:39:51.409368 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:39:51.411662 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:39:51.412368 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:51.413898 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:39:51.414779 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:51.416545 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:39:51.419952 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:39:51.422832 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:39:51.426724 systemd[1]: Mounted sys-kernel-config.mount. 
Nov 1 00:39:51.441082 systemd-journald[963]: Time spent on flushing to /var/log/journal/de285237522e4f00a8adfaf76d000d2d is 83.793ms for 1151 entries. Nov 1 00:39:51.441082 systemd-journald[963]: System Journal (/var/log/journal/de285237522e4f00a8adfaf76d000d2d) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:39:51.534291 systemd-journald[963]: Received client request to flush runtime journal. Nov 1 00:39:51.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.449179 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:39:51.449839 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:39:51.453995 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:39:51.473647 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:39:51.476058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:39:51.513895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:39:51.535539 systemd[1]: Finished systemd-journal-flush.service. 
Nov 1 00:39:51.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.546281 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:39:51.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:51.548317 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:39:51.560860 udevadm[998]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:39:52.148227 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:39:52.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.148000 audit: BPF prog-id=18 op=LOAD Nov 1 00:39:52.148000 audit: BPF prog-id=19 op=LOAD Nov 1 00:39:52.148000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:39:52.148000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:39:52.150548 systemd[1]: Starting systemd-udevd.service... Nov 1 00:39:52.175123 systemd-udevd[999]: Using default interface naming scheme 'v252'. Nov 1 00:39:52.209704 systemd[1]: Started systemd-udevd.service. Nov 1 00:39:52.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.211000 audit: BPF prog-id=20 op=LOAD Nov 1 00:39:52.213254 systemd[1]: Starting systemd-networkd.service... 
Nov 1 00:39:52.221000 audit: BPF prog-id=21 op=LOAD Nov 1 00:39:52.221000 audit: BPF prog-id=22 op=LOAD Nov 1 00:39:52.221000 audit: BPF prog-id=23 op=LOAD Nov 1 00:39:52.223642 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:39:52.274727 systemd[1]: Started systemd-userdbd.service. Nov 1 00:39:52.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.285357 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Nov 1 00:39:52.306371 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:52.306648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:52.308190 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:52.311208 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:52.313497 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:52.315134 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:39:52.315189 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:39:52.315260 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:52.315816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:52.315995 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:52.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:52.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.318084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:52.318231 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:52.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.320452 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:52.320621 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:52.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.321395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:52.321444 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Nov 1 00:39:52.394448 systemd-networkd[1005]: lo: Link UP Nov 1 00:39:52.394458 systemd-networkd[1005]: lo: Gained carrier Nov 1 00:39:52.395015 systemd-networkd[1005]: Enumeration completed Nov 1 00:39:52.395139 systemd[1]: Started systemd-networkd.service. Nov 1 00:39:52.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.396082 systemd-networkd[1005]: eth1: Configuring with /run/systemd/network/10-8a:c7:10:ea:e7:0b.network. Nov 1 00:39:52.397272 systemd-networkd[1005]: eth0: Configuring with /run/systemd/network/10-42:82:7c:2b:04:cb.network. Nov 1 00:39:52.398283 systemd-networkd[1005]: eth1: Link UP Nov 1 00:39:52.398292 systemd-networkd[1005]: eth1: Gained carrier Nov 1 00:39:52.402355 systemd-networkd[1005]: eth0: Link UP Nov 1 00:39:52.402363 systemd-networkd[1005]: eth0: Gained carrier Nov 1 00:39:52.417456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Nov 1 00:39:52.418118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:39:52.427015 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:39:52.404000 audit[1011]: AVC avc: denied { confidentiality } for pid=1011 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:39:52.404000 audit[1011]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bddb756730 a1=338ec a2=7ff87d88ebc5 a3=5 items=110 ppid=999 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:52.404000 audit: CWD cwd="/" Nov 1 00:39:52.404000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=1 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=2 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=3 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=4 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=5 name=(null) inode=14514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=6 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=7 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=8 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=9 name=(null) inode=14516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=10 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=11 name=(null) inode=14517 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=12 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=13 name=(null) inode=14518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=14 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=15 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=16 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=17 name=(null) inode=14520 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=18 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=19 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=20 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=21 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=22 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=23 name=(null) inode=14523 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:39:52.463057 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:39:52.404000 audit: PATH item=24 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=25 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=26 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=27 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=28 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=29 name=(null) inode=14526 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=30 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=31 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=32 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=33 name=(null) inode=14528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=34 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=35 name=(null) inode=14529 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=36 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=37 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=38 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=39 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=40 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=41 name=(null) inode=14532 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=42 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=43 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=44 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=45 name=(null) inode=14534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=46 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=47 name=(null) inode=14535 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=48 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=49 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=50 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 
audit: PATH item=51 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=52 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=53 name=(null) inode=14538 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=55 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=56 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=57 name=(null) inode=14540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=58 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=59 name=(null) inode=14541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=60 name=(null) inode=14539 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=61 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=62 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=63 name=(null) inode=14543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=64 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=65 name=(null) inode=14544 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=66 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=67 name=(null) inode=14545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=68 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=69 name=(null) inode=14546 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=70 name=(null) inode=14542 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=71 name=(null) inode=14547 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=72 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=73 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=74 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=75 name=(null) inode=14549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=76 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=77 name=(null) inode=14550 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=78 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=79 name=(null) inode=14551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=80 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=81 name=(null) inode=14552 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=82 name=(null) inode=14548 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=83 name=(null) inode=14553 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=84 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=85 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=86 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=87 name=(null) inode=14555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:39:52.404000 audit: PATH item=88 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=89 name=(null) inode=14556 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=90 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=91 name=(null) inode=14557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=92 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=93 name=(null) inode=14558 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=94 name=(null) inode=14554 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=95 name=(null) inode=14559 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=96 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=97 
name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=98 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=99 name=(null) inode=14561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=100 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=101 name=(null) inode=14562 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=102 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=103 name=(null) inode=14563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=104 name=(null) inode=14560 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=105 name=(null) inode=14564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=106 name=(null) inode=14560 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=107 name=(null) inode=14565 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PATH item=109 name=(null) inode=14566 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:39:52.404000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:39:52.490021 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 1 00:39:52.494015 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:39:52.593035 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:39:52.634738 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:39:52.638092 kernel: kauditd_printk_skb: 229 callbacks suppressed Nov 1 00:39:52.638202 kernel: audit: type=1130 audit(1761957592.634:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.637335 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:39:52.661004 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:39:52.686477 systemd[1]: Finished lvm2-activation-early.service. 
Nov 1 00:39:52.687273 systemd[1]: Reached target cryptsetup.target. Nov 1 00:39:52.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.688004 kernel: audit: type=1130 audit(1761957592.686:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.693763 systemd[1]: Starting lvm2-activation.service... Nov 1 00:39:52.699258 lvm[1038]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:39:52.725536 systemd[1]: Finished lvm2-activation.service. Nov 1 00:39:52.726351 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:39:52.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.727005 kernel: audit: type=1130 audit(1761957592.726:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.733357 systemd[1]: Mounting media-configdrive.mount... Nov 1 00:39:52.733974 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:39:52.734098 systemd[1]: Reached target machines.target. Nov 1 00:39:52.735962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:39:52.753329 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Nov 1 00:39:52.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.759127 kernel: audit: type=1130 audit(1761957592.752:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.761047 kernel: ISO 9660 Extensions: RRIP_1991A Nov 1 00:39:52.762176 systemd[1]: Mounted media-configdrive.mount. Nov 1 00:39:52.762744 systemd[1]: Reached target local-fs.target. Nov 1 00:39:52.765012 systemd[1]: Starting ldconfig.service... Nov 1 00:39:52.766181 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:52.766230 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:52.768148 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:39:52.772290 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:39:52.774355 systemd[1]: Starting systemd-sysext.service... Nov 1 00:39:52.778959 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) Nov 1 00:39:52.780861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:39:52.803654 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:39:52.814183 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:39:52.814384 systemd[1]: Unmounted usr-share-oem.mount. 
Nov 1 00:39:52.836056 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 00:39:52.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.881537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:39:52.882204 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:39:52.888036 kernel: audit: type=1130 audit(1761957592.881:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.912306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:39:52.932032 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 00:39:52.935488 systemd-fsck[1051]: fsck.fat 4.2 (2021-01-31) Nov 1 00:39:52.935488 systemd-fsck[1051]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:39:52.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.938517 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:39:52.942147 systemd[1]: Mounting boot.mount... Nov 1 00:39:52.947102 kernel: audit: type=1130 audit(1761957592.939:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.958154 (sd-sysext)[1055]: Using extensions 'kubernetes'. Nov 1 00:39:52.959584 (sd-sysext)[1055]: Merged extensions into '/usr'. 
Nov 1 00:39:52.972128 systemd[1]: Mounted boot.mount. Nov 1 00:39:52.984506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:52.986235 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:39:52.986994 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:52.991253 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:52.993539 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:52.996552 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:52.999219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:52.999373 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:52.999485 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:53.000403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:53.000536 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:53.006087 kernel: audit: type=1130 audit(1761957593.000:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.006201 kernel: audit: type=1131 audit(1761957593.004:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:53.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.010186 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:53.010340 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:53.011523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:53.011659 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:53.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.016487 kernel: audit: type=1130 audit(1761957593.009:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.016628 kernel: audit: type=1131 audit(1761957593.009:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:53.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.025506 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:39:53.026661 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:53.026773 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.029625 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:39:53.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.030622 systemd[1]: Finished systemd-sysext.service. Nov 1 00:39:53.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.034005 systemd[1]: Starting ensure-sysext.service... Nov 1 00:39:53.040454 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:39:53.046604 systemd[1]: Reloading. Nov 1 00:39:53.067463 systemd-tmpfiles[1063]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:39:53.071617 systemd-tmpfiles[1063]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:39:53.075296 systemd-tmpfiles[1063]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
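The systemd-tmpfiles warnings above ("Duplicate line for path …, ignoring") reflect first-entry-wins handling: once a tmpfiles.d line claims a path, later lines for the same path are ignored. A minimal sketch of that behavior, assuming a simplified line format (the real parser considers type and arguments, not just the path):

```python
# Simplified model of systemd-tmpfiles duplicate handling: the first
# line claiming a path wins; later lines for the same path are ignored.
# Illustrative only -- not systemd's actual parser.

def find_duplicates(lines):
    """Return (lineno, path) pairs for entries whose path was already claimed."""
    seen = {}
    dups = []
    for lineno, line in enumerate(lines, start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, as tmpfiles.d does
        fields = line.split()
        if len(fields) < 2:
            continue
        path = fields[1]  # second field of a tmpfiles.d line is the path
        if path in seen:
            dups.append((lineno, path))
        else:
            seen[path] = lineno
    return dups

conf = [
    "d /run/lock 0755 root root -",
    "# comment",
    "L /run/lock - - - - ../lock",   # duplicate path -> reported, ignored
    "d /var/lib/systemd 0755 root root -",
]
print(find_duplicates(conf))  # → [(3, '/run/lock')]
```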
Nov 1 00:39:53.193817 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-11-01T00:39:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:39:53.197280 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-11-01T00:39:53Z" level=info msg="torcx already run" Nov 1 00:39:53.256631 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:39:53.345956 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:39:53.346027 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:39:53.380270 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
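The locksmithd.service warnings above flag legacy cgroup v1 directives (`CPUShares=`, `MemoryLimit=`) that systemd asks to be replaced with their unified-hierarchy equivalents. A small linter sketch that reports such lines; note it only reports and does not rewrite values, since the value semantics differ (e.g. `CPUShares=` weights are not 1:1 with `CPUWeight=`):

```python
# Flag legacy systemd resource-control directives and name the modern
# replacement, mirroring the warnings systemd logs for locksmithd.service.
# Reporting only: value scales differ between old and new directives.

LEGACY = {
    "CPUShares": "CPUWeight",
    "MemoryLimit": "MemoryMax",
}

def lint_unit(text):
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        key = line.split("=", 1)[0].strip()
        if key in LEGACY:
            findings.append((lineno, key, LEGACY[key]))
    return findings

unit = "[Service]\nCPUShares=1024\nMemoryLimit=1G\n"
print(lint_unit(unit))
# → [(2, 'CPUShares', 'CPUWeight'), (3, 'MemoryLimit', 'MemoryMax')]
```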
Nov 1 00:39:53.457000 audit: BPF prog-id=24 op=LOAD Nov 1 00:39:53.457000 audit: BPF prog-id=25 op=LOAD Nov 1 00:39:53.457000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:39:53.457000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:39:53.459000 audit: BPF prog-id=26 op=LOAD Nov 1 00:39:53.460000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:39:53.460000 audit: BPF prog-id=27 op=LOAD Nov 1 00:39:53.460000 audit: BPF prog-id=28 op=LOAD Nov 1 00:39:53.460000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:39:53.460000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:39:53.461000 audit: BPF prog-id=29 op=LOAD Nov 1 00:39:53.461000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:39:53.461000 audit: BPF prog-id=30 op=LOAD Nov 1 00:39:53.462000 audit: BPF prog-id=31 op=LOAD Nov 1 00:39:53.462000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:39:53.462000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:39:53.466000 audit: BPF prog-id=32 op=LOAD Nov 1 00:39:53.466000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:39:53.471390 systemd[1]: Finished ldconfig.service. Nov 1 00:39:53.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.474348 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:39:53.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.489119 systemd[1]: Starting audit-rules.service... Nov 1 00:39:53.491684 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:39:53.495118 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:39:53.499000 audit: BPF prog-id=33 op=LOAD Nov 1 00:39:53.503323 systemd[1]: Starting systemd-resolved.service... 
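The burst of `BPF prog-id=N op=LOAD/UNLOAD` audit records above comes from the daemon reload re-attaching systemd's cgroup BPF programs. A sketch that replays such events and reports which program IDs remain loaded afterwards (assumes events arrive in order):

```python
import re

# Replay audit BPF LOAD/UNLOAD events and report the program IDs still
# loaded at the end. Sample events below are hypothetical, shaped like
# the "audit: BPF prog-id=... op=..." records in the journal.

EVENT = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def live_progs(lines):
    live = set()
    for line in lines:
        m = EVENT.search(line)
        if not m:
            continue
        prog_id, op = int(m.group(1)), m.group(2)
        if op == "LOAD":
            live.add(prog_id)
        else:
            live.discard(prog_id)  # tolerate UNLOAD for an unseen id
    return sorted(live)

events = [
    "audit: BPF prog-id=24 op=LOAD",
    "audit: BPF prog-id=25 op=LOAD",
    "audit: BPF prog-id=18 op=UNLOAD",
    "audit: BPF prog-id=24 op=UNLOAD",
]
print(live_progs(events))  # → [25]
```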
Nov 1 00:39:53.503000 audit: BPF prog-id=34 op=LOAD Nov 1 00:39:53.506736 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:39:53.509671 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:39:53.512753 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:39:53.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.517364 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.520292 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:53.523349 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:53.525000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.527892 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:53.528523 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.528761 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:53.528932 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:39:53.531310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:53.531505 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:53.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:53.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.536838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:53.536980 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:53.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.538054 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:53.538196 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:53.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.542012 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.544126 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:53.546911 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:53.551298 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:53.551897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:39:53.552156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:53.552376 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:39:53.553886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:53.554710 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:39:53.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.555872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:53.557105 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:53.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.558369 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:53.558538 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:39:53.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.561462 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:39:53.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.566874 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.568763 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:39:53.571653 systemd[1]: Starting modprobe@drm.service... Nov 1 00:39:53.574570 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:39:53.578905 systemd[1]: Starting modprobe@loop.service... Nov 1 00:39:53.579937 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.580329 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:53.583652 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:39:53.585679 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:39:53.587770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:39:53.588321 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:39:53.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.591937 systemd[1]: Finished ensure-sysext.service. Nov 1 00:39:53.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.603816 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:39:53.604066 systemd[1]: Finished modprobe@drm.service. Nov 1 00:39:53.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.605575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:39:53.605769 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:39:53.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:39:53.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.606791 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:39:53.607243 systemd[1]: Finished modprobe@loop.service. Nov 1 00:39:53.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.608094 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:39:53.608167 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.616005 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:39:53.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:39:53.618592 systemd[1]: Starting systemd-update-done.service... Nov 1 00:39:53.630160 systemd[1]: Finished systemd-update-done.service. Nov 1 00:39:53.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
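Each `modprobe@*.service` oneshot above produces a SERVICE_START immediately followed by a SERVICE_STOP audit record. A sketch that tallies starts and stops per unit from such records, to confirm the pairs balance (the sample records are abbreviated stand-ins for the real ones):

```python
import re

# Tally SERVICE_START/SERVICE_STOP audit records per unit.
# counts[unit] == [starts, stops]; balanced pairs mean the oneshot
# units ran and deactivated cleanly.

REC = re.compile(r"(SERVICE_START|SERVICE_STOP).*unit=(\S+)")

def tally(lines):
    counts = {}
    for line in lines:
        m = REC.search(line)
        if m:
            kind, unit = m.groups()
            counts.setdefault(unit, [0, 0])[0 if kind == "SERVICE_START" else 1] += 1
    return counts

records = [
    "audit[1]: SERVICE_START pid=1 msg='unit=modprobe@dm_mod comm=\"systemd\"'",
    "audit[1]: SERVICE_STOP pid=1 msg='unit=modprobe@dm_mod comm=\"systemd\"'",
]
print(tally(records))  # → {'modprobe@dm_mod': [1, 1]}
```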
res=success' Nov 1 00:39:53.645000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:39:53.645000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe08e6f190 a2=420 a3=0 items=0 ppid=1130 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:39:53.645000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:39:53.647332 augenrules[1162]: No rules Nov 1 00:39:53.648466 systemd[1]: Finished audit-rules.service. Nov 1 00:39:53.665042 systemd-resolved[1136]: Positive Trust Anchors: Nov 1 00:39:53.665610 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:39:53.665767 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:39:53.671546 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:39:53.672275 systemd[1]: Reached target time-set.target. Nov 1 00:39:53.674141 systemd-resolved[1136]: Using system hostname 'ci-3510.3.8-n-39b63463e5'. Nov 1 00:39:53.678550 systemd[1]: Started systemd-resolved.service. Nov 1 00:39:53.679245 systemd[1]: Reached target network.target. Nov 1 00:39:53.679729 systemd[1]: Reached target nss-lookup.target. Nov 1 00:39:53.680234 systemd[1]: Reached target sysinit.target. Nov 1 00:39:53.680853 systemd[1]: Started motdgen.path. 
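The audit PROCTITLE record above hex-encodes the process's argv, with NUL bytes separating arguments. Decoding the value from that record recovers the auditctl invocation that loaded the (empty) rule set:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
# The hex string below is taken verbatim from the journal entry above.

def decode_proctitle(hexstr):
    return [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]

argv = decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
print(argv)  # → ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```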
Nov 1 00:39:53.681451 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:39:53.682231 systemd[1]: Started logrotate.timer. Nov 1 00:39:53.682736 systemd[1]: Started mdadm.timer. Nov 1 00:39:53.683212 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:39:53.683648 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:39:53.683679 systemd[1]: Reached target paths.target. Nov 1 00:39:53.684092 systemd[1]: Reached target timers.target. Nov 1 00:39:53.684867 systemd[1]: Listening on dbus.socket. Nov 1 00:39:53.685410 systemd-timesyncd[1137]: Contacted time server 104.131.155.175:123 (0.flatcar.pool.ntp.org). Nov 1 00:39:53.685480 systemd-timesyncd[1137]: Initial clock synchronization to Sat 2025-11-01 00:39:53.945421 UTC. Nov 1 00:39:53.686811 systemd[1]: Starting docker.socket... Nov 1 00:39:53.691487 systemd[1]: Listening on sshd.socket. Nov 1 00:39:53.692474 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:53.693095 systemd[1]: Listening on docker.socket. Nov 1 00:39:53.693925 systemd[1]: Reached target sockets.target. Nov 1 00:39:53.694656 systemd[1]: Reached target basic.target. Nov 1 00:39:53.695507 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.695549 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:39:53.697279 systemd[1]: Starting containerd.service... Nov 1 00:39:53.700505 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:39:53.703452 systemd[1]: Starting dbus.service... Nov 1 00:39:53.710864 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:39:53.714795 systemd[1]: Starting extend-filesystems.service... 
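The path unit `user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path` above encodes a filesystem path in its instance name: `/` becomes `-`, and a literal `-` becomes `\x2d`. A sketch of the reverse mapping (`systemd-escape --unescape --path` does this for real):

```python
import re

# Invert systemd's path escaping for unit instance names:
# '-' separates path components; '\xNN' sequences encode literal bytes.

def unescape_path(instance):
    segs = instance.split("-")
    segs = [re.sub(r"\\x([0-9a-fA-F]{2})",
                   lambda m: chr(int(m.group(1), 16)), s) for s in segs]
    return "/" + "/".join(segs)

print(unescape_path(r"var-lib-flatcar\x2dinstall-user_data"))
# → /var/lib/flatcar-install/user_data
```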
Nov 1 00:39:53.715606 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:39:53.729636 jq[1175]: false Nov 1 00:39:53.718091 systemd[1]: Starting motdgen.service... Nov 1 00:39:53.722704 systemd[1]: Starting prepare-helm.service... Nov 1 00:39:53.725689 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:39:53.730392 systemd[1]: Starting sshd-keygen.service... Nov 1 00:39:53.736775 systemd[1]: Starting systemd-logind.service... Nov 1 00:39:53.737350 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:39:53.785294 jq[1187]: true Nov 1 00:39:53.737503 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:39:53.738805 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:39:53.740582 systemd[1]: Starting update-engine.service... Nov 1 00:39:53.746218 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:39:53.752162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:39:53.752389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:39:53.753706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:39:53.753956 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:39:53.763675 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:53.763714 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:39:53.794217 systemd[1]: Started dbus.service. 
Nov 1 00:39:53.794003 dbus-daemon[1172]: [system] SELinux support is enabled Nov 1 00:39:53.797228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:39:53.797259 systemd[1]: Reached target system-config.target. Nov 1 00:39:53.797797 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:39:53.797824 systemd[1]: Reached target user-config.target. Nov 1 00:39:53.806415 tar[1190]: linux-amd64/LICENSE Nov 1 00:39:53.806415 tar[1190]: linux-amd64/helm Nov 1 00:39:53.813091 jq[1196]: true Nov 1 00:39:53.826450 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:39:53.826646 systemd[1]: Finished motdgen.service. Nov 1 00:39:53.851855 systemd-networkd[1005]: eth1: Gained IPv6LL Nov 1 00:39:53.855241 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:39:53.855849 systemd[1]: Reached target network-online.target. Nov 1 00:39:53.858054 systemd[1]: Starting kubelet.service... Nov 1 00:39:53.878868 extend-filesystems[1176]: Found loop1 Nov 1 00:39:53.879887 extend-filesystems[1176]: Found vda Nov 1 00:39:53.879887 extend-filesystems[1176]: Found vda1 Nov 1 00:39:53.883229 extend-filesystems[1176]: Found vda2 Nov 1 00:39:53.883833 extend-filesystems[1176]: Found vda3 Nov 1 00:39:53.883833 extend-filesystems[1176]: Found usr Nov 1 00:39:53.883833 extend-filesystems[1176]: Found vda4 Nov 1 00:39:53.883833 extend-filesystems[1176]: Found vda6 Nov 1 00:39:53.886341 extend-filesystems[1176]: Found vda7 Nov 1 00:39:53.886341 extend-filesystems[1176]: Found vda9 Nov 1 00:39:53.886341 extend-filesystems[1176]: Checking size of /dev/vda9 Nov 1 00:39:53.897784 bash[1221]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:39:53.898384 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Nov 1 00:39:53.932857 extend-filesystems[1176]: Resized partition /dev/vda9 Nov 1 00:39:53.938156 systemd-logind[1184]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:39:53.938571 systemd-logind[1184]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:39:53.939368 systemd-logind[1184]: New seat seat0. Nov 1 00:39:53.942861 systemd[1]: Started systemd-logind.service. Nov 1 00:39:53.951671 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:39:53.957007 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 1 00:39:53.970541 update_engine[1185]: I1101 00:39:53.969822 1185 main.cc:92] Flatcar Update Engine starting Nov 1 00:39:53.984199 update_engine[1185]: I1101 00:39:53.976517 1185 update_check_scheduler.cc:74] Next update check in 8m55s Nov 1 00:39:53.976681 systemd[1]: Started update-engine.service. Nov 1 00:39:53.979614 systemd[1]: Started locksmithd.service. Nov 1 00:39:54.004939 env[1194]: time="2025-11-01T00:39:54.004853380Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:39:54.076019 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 00:39:54.088323 extend-filesystems[1226]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:39:54.088323 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 00:39:54.088323 extend-filesystems[1226]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 00:39:54.095813 extend-filesystems[1176]: Resized filesystem in /dev/vda9 Nov 1 00:39:54.095813 extend-filesystems[1176]: Found vdb Nov 1 00:39:54.089344 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:39:54.089548 systemd[1]: Finished extend-filesystems.service. 
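The extend-filesystems run above grows /dev/vda9 online from 553472 to 15121403 blocks. With ext4's 4 KiB block size, those counts translate to roughly 2.1 GiB before and 57.7 GiB after the resize:

```python
# Convert the ext4 block counts logged for /dev/vda9 into byte and GiB
# figures, using the 4 KiB block size shown in the kernel messages.

BLOCK = 4096  # bytes per ext4 block on this filesystem

def blocks_to_bytes(blocks, block_size=BLOCK):
    return blocks * block_size

old = blocks_to_bytes(553472)     # size before resize
new = blocks_to_bytes(15121403)   # size after resize
print(old, new)                   # → 2267021312 61937266688
print(round(new / 2**30, 1))      # → 57.7 (GiB)
```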
Nov 1 00:39:54.121062 coreos-metadata[1171]: Nov 01 00:39:54.118 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:39:54.127179 env[1194]: time="2025-11-01T00:39:54.127121022Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:39:54.127536 env[1194]: time="2025-11-01T00:39:54.127510108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:39:54.129621 env[1194]: time="2025-11-01T00:39:54.129555828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:39:54.129756 env[1194]: time="2025-11-01T00:39:54.129738690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:39:54.130183 env[1194]: time="2025-11-01T00:39:54.130144833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:39:54.130292 env[1194]: time="2025-11-01T00:39:54.130275254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:39:54.130360 env[1194]: time="2025-11-01T00:39:54.130344841Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:39:54.130419 env[1194]: time="2025-11-01T00:39:54.130404985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:39:54.130571 env[1194]: time="2025-11-01T00:39:54.130555637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:39:54.130924 env[1194]: time="2025-11-01T00:39:54.130900736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:39:54.131253 env[1194]: time="2025-11-01T00:39:54.131226271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:39:54.131340 env[1194]: time="2025-11-01T00:39:54.131323618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:39:54.131480 env[1194]: time="2025-11-01T00:39:54.131462915Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:39:54.131625 env[1194]: time="2025-11-01T00:39:54.131608880Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:39:54.137060 coreos-metadata[1171]: Nov 01 00:39:54.134 INFO Fetch successful Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138240218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138295110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138308231Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138342449Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138358317Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138372005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138384277Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138399210Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138413293Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138427968Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138440988Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138453894Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138611827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:39:54.139052 env[1194]: time="2025-11-01T00:39:54.138708742Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:39:54.139532 env[1194]: time="2025-11-01T00:39:54.138985340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139026362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139658839Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139751229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139812672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139834736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139858746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139871720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139884503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139896384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139908498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.139931221Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.140115796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.140131238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.140143020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:39:54.141004 env[1194]: time="2025-11-01T00:39:54.140168922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:39:54.141607 env[1194]: time="2025-11-01T00:39:54.140184977Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:39:54.141607 env[1194]: time="2025-11-01T00:39:54.140197926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:39:54.141607 env[1194]: time="2025-11-01T00:39:54.140216986Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:39:54.141607 env[1194]: time="2025-11-01T00:39:54.140267242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:39:54.141718 env[1194]: time="2025-11-01T00:39:54.140537426Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:39:54.141718 env[1194]: time="2025-11-01T00:39:54.140606157Z" level=info msg="Connect containerd service" Nov 1 00:39:54.141718 env[1194]: time="2025-11-01T00:39:54.140653932Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142279744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142346560Z" level=info msg="Start subscribing containerd event" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142388695Z" level=info msg="Start recovering state" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142455695Z" level=info msg="Start event monitor" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142472372Z" level=info msg="Start snapshots syncer" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142481244Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.142488551Z" level=info msg="Start streaming server" Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.143099206Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:39:54.145231 env[1194]: time="2025-11-01T00:39:54.143431797Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:39:54.153566 unknown[1171]: wrote ssh authorized keys file for user: core Nov 1 00:39:54.165479 systemd[1]: Started containerd.service. 
Nov 1 00:39:54.167702 env[1194]: time="2025-11-01T00:39:54.166819787Z" level=info msg="containerd successfully booted in 0.163170s" Nov 1 00:39:54.169406 update-ssh-keys[1235]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:39:54.169984 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:39:54.425391 systemd-networkd[1005]: eth0: Gained IPv6LL Nov 1 00:39:54.975959 tar[1190]: linux-amd64/README.md Nov 1 00:39:54.982586 systemd[1]: Finished prepare-helm.service. Nov 1 00:39:55.048637 locksmithd[1227]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:39:55.536432 systemd[1]: Started kubelet.service. Nov 1 00:39:55.546022 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:39:55.574465 systemd[1]: Finished sshd-keygen.service. Nov 1 00:39:55.576925 systemd[1]: Starting issuegen.service... Nov 1 00:39:55.585585 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:39:55.585785 systemd[1]: Finished issuegen.service. Nov 1 00:39:55.588231 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:39:55.600414 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:39:55.602817 systemd[1]: Started getty@tty1.service. Nov 1 00:39:55.605619 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:39:55.606414 systemd[1]: Reached target getty.target. Nov 1 00:39:55.606982 systemd[1]: Reached target multi-user.target. Nov 1 00:39:55.609278 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:39:55.622932 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:39:55.623127 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:39:55.624031 systemd[1]: Startup finished in 1.015s (kernel) + 5.102s (initrd) + 7.762s (userspace) = 13.880s. Nov 1 00:39:55.910917 systemd[1]: Created slice system-sshd.slice. Nov 1 00:39:55.913210 systemd[1]: Started sshd@0-143.198.72.73:22-139.178.89.65:50054.service. 
Nov 1 00:39:56.003511 sshd[1265]: Accepted publickey for core from 139.178.89.65 port 50054 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:39:56.005308 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.028054 systemd[1]: Created slice user-500.slice. Nov 1 00:39:56.031192 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:39:56.048688 systemd-logind[1184]: New session 1 of user core. Nov 1 00:39:56.057119 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:39:56.059992 systemd[1]: Starting user@500.service... Nov 1 00:39:56.069232 (systemd)[1268]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.213456 systemd[1268]: Queued start job for default target default.target. Nov 1 00:39:56.214409 systemd[1268]: Reached target paths.target. Nov 1 00:39:56.214443 systemd[1268]: Reached target sockets.target. Nov 1 00:39:56.214464 systemd[1268]: Reached target timers.target. Nov 1 00:39:56.214483 systemd[1268]: Reached target basic.target. Nov 1 00:39:56.214641 systemd[1]: Started user@500.service. Nov 1 00:39:56.216214 systemd[1]: Started session-1.scope. Nov 1 00:39:56.216963 systemd[1268]: Reached target default.target. Nov 1 00:39:56.217257 systemd[1268]: Startup finished in 133ms. Nov 1 00:39:56.286925 systemd[1]: Started sshd@1-143.198.72.73:22-139.178.89.65:50066.service. Nov 1 00:39:56.352631 sshd[1277]: Accepted publickey for core from 139.178.89.65 port 50066 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:39:56.353488 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.360728 systemd[1]: Started session-2.scope. Nov 1 00:39:56.362905 systemd-logind[1184]: New session 2 of user core. 
Nov 1 00:39:56.379809 kubelet[1243]: E1101 00:39:56.379728 1243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:39:56.382519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:39:56.382700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:39:56.383048 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Nov 1 00:39:56.431334 sshd[1277]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:56.439630 systemd[1]: Started sshd@2-143.198.72.73:22-139.178.89.65:50076.service. Nov 1 00:39:56.440467 systemd[1]: sshd@1-143.198.72.73:22-139.178.89.65:50066.service: Deactivated successfully. Nov 1 00:39:56.441556 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:39:56.443554 systemd-logind[1184]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:39:56.444721 systemd-logind[1184]: Removed session 2. Nov 1 00:39:56.490767 sshd[1282]: Accepted publickey for core from 139.178.89.65 port 50076 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:39:56.492919 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.500745 systemd-logind[1184]: New session 3 of user core. Nov 1 00:39:56.500763 systemd[1]: Started session-3.scope. Nov 1 00:39:56.566756 sshd[1282]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:56.572617 systemd[1]: sshd@2-143.198.72.73:22-139.178.89.65:50076.service: Deactivated successfully. Nov 1 00:39:56.573639 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:39:56.574512 systemd-logind[1184]: Session 3 logged out. Waiting for processes to exit. 
Nov 1 00:39:56.576835 systemd[1]: Started sshd@3-143.198.72.73:22-139.178.89.65:50080.service. Nov 1 00:39:56.578537 systemd-logind[1184]: Removed session 3. Nov 1 00:39:56.630947 sshd[1289]: Accepted publickey for core from 139.178.89.65 port 50080 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:39:56.633359 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.639116 systemd[1]: Started session-4.scope. Nov 1 00:39:56.639663 systemd-logind[1184]: New session 4 of user core. Nov 1 00:39:56.706907 sshd[1289]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:56.712964 systemd[1]: sshd@3-143.198.72.73:22-139.178.89.65:50080.service: Deactivated successfully. Nov 1 00:39:56.713708 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:39:56.714325 systemd-logind[1184]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:39:56.715924 systemd[1]: Started sshd@4-143.198.72.73:22-139.178.89.65:50090.service. Nov 1 00:39:56.717475 systemd-logind[1184]: Removed session 4. Nov 1 00:39:56.769077 sshd[1295]: Accepted publickey for core from 139.178.89.65 port 50090 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:39:56.771668 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:39:56.778089 systemd-logind[1184]: New session 5 of user core. Nov 1 00:39:56.778356 systemd[1]: Started session-5.scope. Nov 1 00:39:56.856360 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:39:56.856629 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:39:56.897983 systemd[1]: Starting docker.service... 
Nov 1 00:39:56.952453 env[1308]: time="2025-11-01T00:39:56.952379451Z" level=info msg="Starting up" Nov 1 00:39:56.955220 env[1308]: time="2025-11-01T00:39:56.955171072Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:39:56.955220 env[1308]: time="2025-11-01T00:39:56.955198408Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:39:56.955220 env[1308]: time="2025-11-01T00:39:56.955222439Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:39:56.955494 env[1308]: time="2025-11-01T00:39:56.955241035Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:39:56.957624 env[1308]: time="2025-11-01T00:39:56.957105595Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:39:56.957624 env[1308]: time="2025-11-01T00:39:56.957127692Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:39:56.957624 env[1308]: time="2025-11-01T00:39:56.957142913Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:39:56.957624 env[1308]: time="2025-11-01T00:39:56.957151866Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:39:57.003499 env[1308]: time="2025-11-01T00:39:57.003454763Z" level=info msg="Loading containers: start." Nov 1 00:39:57.172078 kernel: Initializing XFRM netlink socket Nov 1 00:39:57.216804 env[1308]: time="2025-11-01T00:39:57.216723397Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:39:57.309460 systemd-networkd[1005]: docker0: Link UP Nov 1 00:39:57.327110 env[1308]: time="2025-11-01T00:39:57.327048973Z" level=info msg="Loading containers: done." 
Nov 1 00:39:57.345655 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1707545908-merged.mount: Deactivated successfully. Nov 1 00:39:57.348611 env[1308]: time="2025-11-01T00:39:57.348558712Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:39:57.349127 env[1308]: time="2025-11-01T00:39:57.349093288Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:39:57.349402 env[1308]: time="2025-11-01T00:39:57.349383694Z" level=info msg="Daemon has completed initialization" Nov 1 00:39:57.363812 systemd[1]: Started docker.service. Nov 1 00:39:57.375490 env[1308]: time="2025-11-01T00:39:57.375420828Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:39:57.400135 systemd[1]: Starting coreos-metadata.service... Nov 1 00:39:57.449723 coreos-metadata[1425]: Nov 01 00:39:57.449 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:39:57.462384 coreos-metadata[1425]: Nov 01 00:39:57.462 INFO Fetch successful Nov 1 00:39:57.477067 systemd[1]: Finished coreos-metadata.service. Nov 1 00:39:58.324952 env[1194]: time="2025-11-01T00:39:58.324880478Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:39:58.901437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610379633.mount: Deactivated successfully. 
Nov 1 00:40:00.342526 env[1194]: time="2025-11-01T00:40:00.342451911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:00.344601 env[1194]: time="2025-11-01T00:40:00.344549166Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:00.347424 env[1194]: time="2025-11-01T00:40:00.347370147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:00.354263 env[1194]: time="2025-11-01T00:40:00.354194948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:40:00.355735 env[1194]: time="2025-11-01T00:40:00.355687829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:40:00.356305 env[1194]: time="2025-11-01T00:40:00.356267789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:02.258456 env[1194]: time="2025-11-01T00:40:02.258354022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:02.265837 env[1194]: time="2025-11-01T00:40:02.265759027Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:40:02.270178 env[1194]: time="2025-11-01T00:40:02.270064927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:02.276462 env[1194]: time="2025-11-01T00:40:02.276386699Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:02.279448 env[1194]: time="2025-11-01T00:40:02.278670798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:40:02.280614 env[1194]: time="2025-11-01T00:40:02.280535582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:40:03.761020 env[1194]: time="2025-11-01T00:40:03.760916327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:03.763299 env[1194]: time="2025-11-01T00:40:03.763238876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:03.767732 env[1194]: time="2025-11-01T00:40:03.766571201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:03.769138 env[1194]: time="2025-11-01T00:40:03.769086453Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:03.769970 env[1194]: time="2025-11-01T00:40:03.769914764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:40:03.770819 env[1194]: time="2025-11-01T00:40:03.770778683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:40:04.945913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647223828.mount: Deactivated successfully. Nov 1 00:40:05.605408 env[1194]: time="2025-11-01T00:40:05.605327476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:05.607037 env[1194]: time="2025-11-01T00:40:05.606981297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:05.608332 env[1194]: time="2025-11-01T00:40:05.608295994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:05.610059 env[1194]: time="2025-11-01T00:40:05.610008678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:05.610632 env[1194]: time="2025-11-01T00:40:05.610578291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference 
\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:40:05.611615 env[1194]: time="2025-11-01T00:40:05.611559102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:40:06.061372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118439205.mount: Deactivated successfully. Nov 1 00:40:06.633654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:40:06.633846 systemd[1]: Stopped kubelet.service. Nov 1 00:40:06.633897 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Nov 1 00:40:06.636035 systemd[1]: Starting kubelet.service... Nov 1 00:40:06.798258 systemd[1]: Started kubelet.service. Nov 1 00:40:06.884036 kubelet[1446]: E1101 00:40:06.883962 1446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:40:06.887935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:40:06.888099 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:40:07.446363 env[1194]: time="2025-11-01T00:40:07.446297349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.448228 env[1194]: time="2025-11-01T00:40:07.448170072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.449336 env[1194]: time="2025-11-01T00:40:07.449279247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.451713 env[1194]: time="2025-11-01T00:40:07.451524336Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.452904 env[1194]: time="2025-11-01T00:40:07.452846435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:40:07.453807 env[1194]: time="2025-11-01T00:40:07.453761423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:40:07.924006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288254897.mount: Deactivated successfully. 
Nov 1 00:40:07.929933 env[1194]: time="2025-11-01T00:40:07.929866606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.932302 env[1194]: time="2025-11-01T00:40:07.932240924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.933537 env[1194]: time="2025-11-01T00:40:07.933497092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.935019 env[1194]: time="2025-11-01T00:40:07.934956848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:07.935922 env[1194]: time="2025-11-01T00:40:07.935867936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:40:07.936756 env[1194]: time="2025-11-01T00:40:07.936716906Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:40:11.647026 env[1194]: time="2025-11-01T00:40:11.646949194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:11.650232 env[1194]: time="2025-11-01T00:40:11.650182372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:11.655372 env[1194]: time="2025-11-01T00:40:11.655307852Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:11.659867 env[1194]: time="2025-11-01T00:40:11.659811451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:11.661243 env[1194]: time="2025-11-01T00:40:11.661185237Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:40:15.388712 systemd[1]: Stopped kubelet.service. Nov 1 00:40:15.392038 systemd[1]: Starting kubelet.service... Nov 1 00:40:15.432222 systemd[1]: Reloading. Nov 1 00:40:15.588238 /usr/lib/systemd/system-generators/torcx-generator[1498]: time="2025-11-01T00:40:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:40:15.588284 /usr/lib/systemd/system-generators/torcx-generator[1498]: time="2025-11-01T00:40:15Z" level=info msg="torcx already run" Nov 1 00:40:15.732178 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:40:15.732449 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:40:15.753057 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:40:15.876228 systemd[1]: Stopping kubelet.service... Nov 1 00:40:15.877239 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:40:15.877501 systemd[1]: Stopped kubelet.service. Nov 1 00:40:15.879955 systemd[1]: Starting kubelet.service... Nov 1 00:40:16.013634 systemd[1]: Started kubelet.service. Nov 1 00:40:16.101555 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:40:16.101555 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:40:16.102784 kubelet[1549]: I1101 00:40:16.102697 1549 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:40:16.442283 kubelet[1549]: I1101 00:40:16.441585 1549 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:40:16.442283 kubelet[1549]: I1101 00:40:16.441663 1549 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:40:16.442613 kubelet[1549]: I1101 00:40:16.442573 1549 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:40:16.442663 kubelet[1549]: I1101 00:40:16.442620 1549 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:40:16.443087 kubelet[1549]: I1101 00:40:16.443032 1549 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:40:16.451933 kubelet[1549]: E1101 00:40:16.451885 1549 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.72.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:40:16.452671 kubelet[1549]: I1101 00:40:16.452621 1549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:40:16.462512 kubelet[1549]: E1101 00:40:16.462459 1549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:40:16.462712 kubelet[1549]: I1101 00:40:16.462559 1549 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:40:16.470573 kubelet[1549]: I1101 00:40:16.470524 1549 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:40:16.470892 kubelet[1549]: I1101 00:40:16.470855 1549 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:40:16.471250 kubelet[1549]: I1101 00:40:16.470892 1549 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-39b63463e5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:40:16.471250 kubelet[1549]: I1101 00:40:16.471153 1549 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
00:40:16.471250 kubelet[1549]: I1101 00:40:16.471169 1549 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:40:16.471514 kubelet[1549]: I1101 00:40:16.471307 1549 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:40:16.473743 kubelet[1549]: I1101 00:40:16.473661 1549 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:40:16.475596 kubelet[1549]: I1101 00:40:16.475554 1549 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:40:16.475596 kubelet[1549]: I1101 00:40:16.475604 1549 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:40:16.475806 kubelet[1549]: I1101 00:40:16.475648 1549 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:40:16.475806 kubelet[1549]: I1101 00:40:16.475671 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:40:16.479215 kubelet[1549]: I1101 00:40:16.479177 1549 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:40:16.480764 kubelet[1549]: I1101 00:40:16.479972 1549 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:40:16.480764 kubelet[1549]: I1101 00:40:16.480046 1549 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:40:16.480764 kubelet[1549]: W1101 00:40:16.480153 1549 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:40:16.484714 kubelet[1549]: E1101 00:40:16.484675 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.72.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:40:16.485055 kubelet[1549]: E1101 00:40:16.485028 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.72.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-39b63463e5&limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:40:16.485380 kubelet[1549]: I1101 00:40:16.485348 1549 server.go:1262] "Started kubelet" Nov 1 00:40:16.488628 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Nov 1 00:40:16.489029 kubelet[1549]: I1101 00:40:16.488944 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:40:16.495930 kubelet[1549]: E1101 00:40:16.494401 1549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.72.73:6443/api/v1/namespaces/default/events\": dial tcp 143.198.72.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-39b63463e5.1873bb1defe69aba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-39b63463e5,UID:ci-3510.3.8-n-39b63463e5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-39b63463e5,},FirstTimestamp:2025-11-01 00:40:16.485309114 +0000 UTC m=+0.453691609,LastTimestamp:2025-11-01 00:40:16.485309114 +0000 UTC m=+0.453691609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-39b63463e5,}" Nov 1 00:40:16.497113 kubelet[1549]: I1101 00:40:16.496349 1549 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:40:16.499205 kubelet[1549]: I1101 00:40:16.499174 1549 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:40:16.501323 kubelet[1549]: I1101 00:40:16.501287 1549 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:40:16.502166 kubelet[1549]: E1101 00:40:16.502124 1549 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-39b63463e5\" not found" Nov 1 00:40:16.503411 kubelet[1549]: I1101 00:40:16.503366 1549 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:40:16.503554 kubelet[1549]: I1101 00:40:16.503453 1549 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:40:16.504603 kubelet[1549]: E1101 00:40:16.504577 1549 reflector.go:205] "Failed to 
watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.72.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:40:16.504869 kubelet[1549]: E1101 00:40:16.504843 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.72.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-39b63463e5?timeout=10s\": dial tcp 143.198.72.73:6443: connect: connection refused" interval="200ms" Nov 1 00:40:16.505944 kubelet[1549]: I1101 00:40:16.505915 1549 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:40:16.506233 kubelet[1549]: I1101 00:40:16.506206 1549 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:40:16.508264 kubelet[1549]: I1101 00:40:16.508232 1549 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:40:16.508773 kubelet[1549]: I1101 00:40:16.508749 1549 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:40:16.509085 kubelet[1549]: I1101 00:40:16.509071 1549 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:40:16.509240 kubelet[1549]: I1101 00:40:16.508695 1549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:40:16.509367 kubelet[1549]: E1101 00:40:16.508438 1549 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:40:16.509440 kubelet[1549]: I1101 00:40:16.508571 1549 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:40:16.530252 kubelet[1549]: I1101 00:40:16.530220 1549 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:40:16.530614 kubelet[1549]: I1101 00:40:16.530595 1549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:40:16.530744 kubelet[1549]: I1101 00:40:16.530731 1549 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:40:16.535679 kubelet[1549]: I1101 00:40:16.535647 1549 policy_none.go:49] "None policy: Start" Nov 1 00:40:16.535891 kubelet[1549]: I1101 00:40:16.535877 1549 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:40:16.536026 kubelet[1549]: I1101 00:40:16.536008 1549 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:40:16.537202 kubelet[1549]: I1101 00:40:16.537182 1549 policy_none.go:47] "Start" Nov 1 00:40:16.542670 systemd[1]: Created slice kubepods.slice. Nov 1 00:40:16.549590 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:40:16.561137 systemd[1]: Created slice kubepods-besteffort.slice. 
Nov 1 00:40:16.566900 kubelet[1549]: E1101 00:40:16.566865 1549 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:40:16.567090 kubelet[1549]: I1101 00:40:16.567062 1549 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:40:16.567153 kubelet[1549]: I1101 00:40:16.567075 1549 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:40:16.567408 kubelet[1549]: I1101 00:40:16.567388 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:40:16.568919 kubelet[1549]: E1101 00:40:16.568897 1549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:40:16.569123 kubelet[1549]: E1101 00:40:16.569105 1549 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-39b63463e5\" not found" Nov 1 00:40:16.574669 kubelet[1549]: I1101 00:40:16.574634 1549 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:40:16.576424 kubelet[1549]: I1101 00:40:16.576387 1549 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:40:16.576424 kubelet[1549]: I1101 00:40:16.576421 1549 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:40:16.577332 kubelet[1549]: I1101 00:40:16.577306 1549 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:40:16.577433 kubelet[1549]: E1101 00:40:16.577387 1549 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 1 00:40:16.578363 kubelet[1549]: E1101 00:40:16.578329 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.72.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:40:16.669355 kubelet[1549]: I1101 00:40:16.669317 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.669955 kubelet[1549]: E1101 00:40:16.669923 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.72.73:6443/api/v1/nodes\": dial tcp 143.198.72.73:6443: connect: connection refused" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.687807 systemd[1]: Created slice kubepods-burstable-pod481159a788ccf67abecc90cf3a156d86.slice. Nov 1 00:40:16.694844 kubelet[1549]: E1101 00:40:16.694402 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.702044 systemd[1]: Created slice kubepods-burstable-pod2700648bc049198e7de2f1cf298d2298.slice. 
Nov 1 00:40:16.704600 kubelet[1549]: I1101 00:40:16.704557 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.704880 kubelet[1549]: I1101 00:40:16.704852 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.705727 kubelet[1549]: I1101 00:40:16.705668 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.705951 kubelet[1549]: I1101 00:40:16.705923 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706155 kubelet[1549]: I1101 00:40:16.706129 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706315 kubelet[1549]: I1101 00:40:16.706292 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce92f517e597182bfa59fb7407cf8b1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-39b63463e5\" (UID: \"5ce92f517e597182bfa59fb7407cf8b1\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706473 kubelet[1549]: I1101 00:40:16.706451 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706626 kubelet[1549]: I1101 00:40:16.706593 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706770 kubelet[1549]: I1101 00:40:16.706751 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.706884 kubelet[1549]: E1101 00:40:16.705593 1549 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://143.198.72.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-39b63463e5?timeout=10s\": dial tcp 143.198.72.73:6443: connect: connection refused" interval="400ms" Nov 1 00:40:16.709936 kubelet[1549]: E1101 00:40:16.709898 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.714333 systemd[1]: Created slice kubepods-burstable-pod5ce92f517e597182bfa59fb7407cf8b1.slice. Nov 1 00:40:16.717171 kubelet[1549]: E1101 00:40:16.717124 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.872263 kubelet[1549]: I1101 00:40:16.872213 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.873093 kubelet[1549]: E1101 00:40:16.873047 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.72.73:6443/api/v1/nodes\": dial tcp 143.198.72.73:6443: connect: connection refused" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:16.997785 kubelet[1549]: E1101 00:40:16.997727 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:16.999158 env[1194]: time="2025-11-01T00:40:16.999055639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-39b63463e5,Uid:481159a788ccf67abecc90cf3a156d86,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:17.012145 kubelet[1549]: E1101 00:40:17.012107 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:17.013161 env[1194]: time="2025-11-01T00:40:17.013101935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-39b63463e5,Uid:2700648bc049198e7de2f1cf298d2298,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:17.019379 kubelet[1549]: E1101 00:40:17.019344 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:17.020235 env[1194]: time="2025-11-01T00:40:17.020177782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-39b63463e5,Uid:5ce92f517e597182bfa59fb7407cf8b1,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:17.108198 kubelet[1549]: E1101 00:40:17.108140 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.72.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-39b63463e5?timeout=10s\": dial tcp 143.198.72.73:6443: connect: connection refused" interval="800ms" Nov 1 00:40:17.275794 kubelet[1549]: I1101 00:40:17.274695 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:17.275794 kubelet[1549]: E1101 00:40:17.275258 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.72.73:6443/api/v1/nodes\": dial tcp 143.198.72.73:6443: connect: connection refused" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:17.342781 kubelet[1549]: E1101 00:40:17.342721 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.72.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:40:17.450595 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3891618960.mount: Deactivated successfully. Nov 1 00:40:17.457075 env[1194]: time="2025-11-01T00:40:17.457011264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.458546 env[1194]: time="2025-11-01T00:40:17.458499138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.461686 env[1194]: time="2025-11-01T00:40:17.461632658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.464114 env[1194]: time="2025-11-01T00:40:17.464060133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.467936 env[1194]: time="2025-11-01T00:40:17.467878068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.472153 env[1194]: time="2025-11-01T00:40:17.472094589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.473093 env[1194]: time="2025-11-01T00:40:17.473050279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.474048 env[1194]: time="2025-11-01T00:40:17.474001395Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.475199 env[1194]: time="2025-11-01T00:40:17.475158200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.476184 env[1194]: time="2025-11-01T00:40:17.476145192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.477021 env[1194]: time="2025-11-01T00:40:17.476962553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.477953 env[1194]: time="2025-11-01T00:40:17.477910422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:17.494706 kubelet[1549]: E1101 00:40:17.494525 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.72.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-39b63463e5&limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:40:17.522088 env[1194]: time="2025-11-01T00:40:17.521958669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:17.522088 env[1194]: time="2025-11-01T00:40:17.522033373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:17.522382 env[1194]: time="2025-11-01T00:40:17.522083243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:17.522724 env[1194]: time="2025-11-01T00:40:17.522648696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:17.522922 env[1194]: time="2025-11-01T00:40:17.522875954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:17.523143 env[1194]: time="2025-11-01T00:40:17.523097022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:17.523524 env[1194]: time="2025-11-01T00:40:17.523473682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18edc0d42b56fe4daad20ebdbf2fb1b091aaaafc81ae055496abeaec047c7f1b pid=1596 runtime=io.containerd.runc.v2 Nov 1 00:40:17.523809 env[1194]: time="2025-11-01T00:40:17.523776298Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5179f74a3103834898d5aebbcf097f1470f95994c113cc18995e5dc5360a313 pid=1603 runtime=io.containerd.runc.v2 Nov 1 00:40:17.545757 systemd[1]: Started cri-containerd-f5179f74a3103834898d5aebbcf097f1470f95994c113cc18995e5dc5360a313.scope. Nov 1 00:40:17.559188 systemd[1]: Started cri-containerd-18edc0d42b56fe4daad20ebdbf2fb1b091aaaafc81ae055496abeaec047c7f1b.scope. 
Nov 1 00:40:17.568628 env[1194]: time="2025-11-01T00:40:17.568385137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:17.568628 env[1194]: time="2025-11-01T00:40:17.568449696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:17.568628 env[1194]: time="2025-11-01T00:40:17.568466704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:17.573952 env[1194]: time="2025-11-01T00:40:17.573716918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ab6637ac95a0dccf84f36ab321c7d51350c39396e61491cb8babebdbd058fd8 pid=1645 runtime=io.containerd.runc.v2 Nov 1 00:40:17.615115 systemd[1]: Started cri-containerd-3ab6637ac95a0dccf84f36ab321c7d51350c39396e61491cb8babebdbd058fd8.scope. 
Nov 1 00:40:17.630247 env[1194]: time="2025-11-01T00:40:17.630198449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-39b63463e5,Uid:481159a788ccf67abecc90cf3a156d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5179f74a3103834898d5aebbcf097f1470f95994c113cc18995e5dc5360a313\"" Nov 1 00:40:17.631695 kubelet[1549]: E1101 00:40:17.631370 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:17.638021 env[1194]: time="2025-11-01T00:40:17.637949893Z" level=info msg="CreateContainer within sandbox \"f5179f74a3103834898d5aebbcf097f1470f95994c113cc18995e5dc5360a313\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:40:17.656164 env[1194]: time="2025-11-01T00:40:17.656114638Z" level=info msg="CreateContainer within sandbox \"f5179f74a3103834898d5aebbcf097f1470f95994c113cc18995e5dc5360a313\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a1a916dcad30c90772ed5ef13e5a10cc3eeeeeeb3a44545839057bfad924ddb\"" Nov 1 00:40:17.658211 env[1194]: time="2025-11-01T00:40:17.658165232Z" level=info msg="StartContainer for \"3a1a916dcad30c90772ed5ef13e5a10cc3eeeeeeb3a44545839057bfad924ddb\"" Nov 1 00:40:17.668699 env[1194]: time="2025-11-01T00:40:17.668631651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-39b63463e5,Uid:2700648bc049198e7de2f1cf298d2298,Namespace:kube-system,Attempt:0,} returns sandbox id \"18edc0d42b56fe4daad20ebdbf2fb1b091aaaafc81ae055496abeaec047c7f1b\"" Nov 1 00:40:17.670460 kubelet[1549]: E1101 00:40:17.670241 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:17.675538 env[1194]: time="2025-11-01T00:40:17.675487021Z" 
level=info msg="CreateContainer within sandbox \"18edc0d42b56fe4daad20ebdbf2fb1b091aaaafc81ae055496abeaec047c7f1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:40:17.697138 env[1194]: time="2025-11-01T00:40:17.697073043Z" level=info msg="CreateContainer within sandbox \"18edc0d42b56fe4daad20ebdbf2fb1b091aaaafc81ae055496abeaec047c7f1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b51aabacabaa0d7b6647b31b29ca6b573f9171ba20573e9d100e84c3383a238\"" Nov 1 00:40:17.698236 systemd[1]: Started cri-containerd-3a1a916dcad30c90772ed5ef13e5a10cc3eeeeeeb3a44545839057bfad924ddb.scope. Nov 1 00:40:17.705026 env[1194]: time="2025-11-01T00:40:17.704948782Z" level=info msg="StartContainer for \"8b51aabacabaa0d7b6647b31b29ca6b573f9171ba20573e9d100e84c3383a238\"" Nov 1 00:40:17.713300 kubelet[1549]: E1101 00:40:17.713246 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.72.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:40:17.718751 env[1194]: time="2025-11-01T00:40:17.718701858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-39b63463e5,Uid:5ce92f517e597182bfa59fb7407cf8b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ab6637ac95a0dccf84f36ab321c7d51350c39396e61491cb8babebdbd058fd8\"" Nov 1 00:40:17.720832 kubelet[1549]: E1101 00:40:17.720768 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:17.724748 env[1194]: time="2025-11-01T00:40:17.724685648Z" level=info msg="CreateContainer within sandbox \"3ab6637ac95a0dccf84f36ab321c7d51350c39396e61491cb8babebdbd058fd8\" 
for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:40:17.743934 systemd[1]: Started cri-containerd-8b51aabacabaa0d7b6647b31b29ca6b573f9171ba20573e9d100e84c3383a238.scope. Nov 1 00:40:17.751949 env[1194]: time="2025-11-01T00:40:17.751878326Z" level=info msg="CreateContainer within sandbox \"3ab6637ac95a0dccf84f36ab321c7d51350c39396e61491cb8babebdbd058fd8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb0abca25bb7f5714b787b908d67f440469e4b6438ba917c72f79fc9754fa776\"" Nov 1 00:40:17.752960 env[1194]: time="2025-11-01T00:40:17.752908463Z" level=info msg="StartContainer for \"bb0abca25bb7f5714b787b908d67f440469e4b6438ba917c72f79fc9754fa776\"" Nov 1 00:40:17.789200 env[1194]: time="2025-11-01T00:40:17.789135979Z" level=info msg="StartContainer for \"3a1a916dcad30c90772ed5ef13e5a10cc3eeeeeeb3a44545839057bfad924ddb\" returns successfully" Nov 1 00:40:17.810282 systemd[1]: Started cri-containerd-bb0abca25bb7f5714b787b908d67f440469e4b6438ba917c72f79fc9754fa776.scope. 
Nov 1 00:40:17.853762 env[1194]: time="2025-11-01T00:40:17.853698937Z" level=info msg="StartContainer for \"8b51aabacabaa0d7b6647b31b29ca6b573f9171ba20573e9d100e84c3383a238\" returns successfully" Nov 1 00:40:17.877672 env[1194]: time="2025-11-01T00:40:17.877606615Z" level=info msg="StartContainer for \"bb0abca25bb7f5714b787b908d67f440469e4b6438ba917c72f79fc9754fa776\" returns successfully" Nov 1 00:40:17.910224 kubelet[1549]: E1101 00:40:17.910169 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.72.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-39b63463e5?timeout=10s\": dial tcp 143.198.72.73:6443: connect: connection refused" interval="1.6s" Nov 1 00:40:17.974243 kubelet[1549]: E1101 00:40:17.974179 1549 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.72.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.72.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:40:18.076416 kubelet[1549]: I1101 00:40:18.076266 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:18.077119 kubelet[1549]: E1101 00:40:18.076696 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.72.73:6443/api/v1/nodes\": dial tcp 143.198.72.73:6443: connect: connection refused" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:18.592272 kubelet[1549]: E1101 00:40:18.592226 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:18.592672 kubelet[1549]: E1101 00:40:18.592432 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:18.595850 kubelet[1549]: E1101 00:40:18.595813 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:18.596038 kubelet[1549]: E1101 00:40:18.596014 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:18.599609 kubelet[1549]: E1101 00:40:18.599570 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:18.599775 kubelet[1549]: E1101 00:40:18.599754 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:19.603488 kubelet[1549]: E1101 00:40:19.603434 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:19.606164 kubelet[1549]: E1101 00:40:19.606045 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:19.622788 kubelet[1549]: E1101 00:40:19.622513 1549 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:19.622788 kubelet[1549]: E1101 00:40:19.622673 1549 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:19.678077 kubelet[1549]: I1101 00:40:19.678031 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.328918 kubelet[1549]: E1101 00:40:21.328869 1549 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-39b63463e5\" not found" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.481454 kubelet[1549]: I1101 00:40:21.481414 1549 apiserver.go:52] "Watching apiserver" Nov 1 00:40:21.483044 kubelet[1549]: I1101 00:40:21.483006 1549 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.503187 kubelet[1549]: I1101 00:40:21.503132 1549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.503536 kubelet[1549]: I1101 00:40:21.503488 1549 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:40:21.563507 kubelet[1549]: E1101 00:40:21.563465 1549 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.563791 kubelet[1549]: I1101 00:40:21.563772 1549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.566156 kubelet[1549]: E1101 00:40:21.566115 1549 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:21.566393 kubelet[1549]: I1101 00:40:21.566376 1549 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" Nov 1 
00:40:21.568865 kubelet[1549]: E1101 00:40:21.568822 1549 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-39b63463e5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:23.497210 systemd[1]: Reloading. Nov 1 00:40:23.617739 /usr/lib/systemd/system-generators/torcx-generator[1850]: time="2025-11-01T00:40:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:40:23.617785 /usr/lib/systemd/system-generators/torcx-generator[1850]: time="2025-11-01T00:40:23Z" level=info msg="torcx already run" Nov 1 00:40:23.761372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:40:23.762279 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:40:23.787762 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:40:23.923566 kubelet[1549]: I1101 00:40:23.923525 1549 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:40:23.924584 systemd[1]: Stopping kubelet.service... Nov 1 00:40:23.949019 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:40:23.949235 systemd[1]: Stopped kubelet.service. Nov 1 00:40:23.951474 systemd[1]: Starting kubelet.service... Nov 1 00:40:25.062306 systemd[1]: Started kubelet.service. 
Nov 1 00:40:25.170572 kubelet[1900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:40:25.170572 kubelet[1900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:40:25.170572 kubelet[1900]: I1101 00:40:25.170342 1900 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:40:25.186342 kubelet[1900]: I1101 00:40:25.186286 1900 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:40:25.186701 kubelet[1900]: I1101 00:40:25.186653 1900 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:40:25.188350 sudo[1911]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:40:25.188748 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:40:25.189511 kubelet[1900]: I1101 00:40:25.189478 1900 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:40:25.189656 kubelet[1900]: I1101 00:40:25.189627 1900 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:40:25.190321 kubelet[1900]: I1101 00:40:25.190295 1900 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:40:25.198592 kubelet[1900]: I1101 00:40:25.198546 1900 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:40:25.218445 kubelet[1900]: I1101 00:40:25.218388 1900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:40:25.231354 kubelet[1900]: E1101 00:40:25.231211 1900 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:40:25.231571 kubelet[1900]: I1101 00:40:25.231383 1900 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:40:25.244417 kubelet[1900]: I1101 00:40:25.244365 1900 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:40:25.246069 kubelet[1900]: I1101 00:40:25.245940 1900 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:40:25.246565 kubelet[1900]: I1101 00:40:25.246072 1900 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-39b63463e5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:40:25.246764 kubelet[1900]: I1101 00:40:25.246574 1900 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
00:40:25.246764 kubelet[1900]: I1101 00:40:25.246620 1900 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:40:25.246910 kubelet[1900]: I1101 00:40:25.246797 1900 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:40:25.248749 kubelet[1900]: I1101 00:40:25.248709 1900 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:40:25.249271 kubelet[1900]: I1101 00:40:25.249244 1900 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:40:25.249362 kubelet[1900]: I1101 00:40:25.249278 1900 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:40:25.249362 kubelet[1900]: I1101 00:40:25.249328 1900 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:40:25.249362 kubelet[1900]: I1101 00:40:25.249350 1900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:40:25.254376 kubelet[1900]: I1101 00:40:25.254332 1900 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:40:25.256759 kubelet[1900]: I1101 00:40:25.256702 1900 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:40:25.256927 kubelet[1900]: I1101 00:40:25.256778 1900 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:40:25.268707 kubelet[1900]: I1101 00:40:25.268658 1900 server.go:1262] "Started kubelet" Nov 1 00:40:25.292586 kubelet[1900]: I1101 00:40:25.292536 1900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:40:25.311467 kubelet[1900]: I1101 00:40:25.309201 1900 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:40:25.323108 kubelet[1900]: I1101 00:40:25.320476 1900 server.go:310] "Adding debug handlers to kubelet server" 
Nov 1 00:40:25.344727 kubelet[1900]: I1101 00:40:25.344631 1900 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:40:25.344924 kubelet[1900]: I1101 00:40:25.344762 1900 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:40:25.348937 kubelet[1900]: I1101 00:40:25.348888 1900 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:40:25.352391 kubelet[1900]: I1101 00:40:25.352336 1900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:40:25.355113 kubelet[1900]: I1101 00:40:25.355072 1900 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:40:25.369451 kubelet[1900]: I1101 00:40:25.369397 1900 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:40:25.372217 kubelet[1900]: I1101 00:40:25.372177 1900 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:40:25.385496 kubelet[1900]: I1101 00:40:25.385447 1900 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:40:25.385708 kubelet[1900]: I1101 00:40:25.385619 1900 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:40:25.390357 kubelet[1900]: I1101 00:40:25.390311 1900 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:40:25.412203 kubelet[1900]: I1101 00:40:25.412125 1900 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:40:25.415199 kubelet[1900]: I1101 00:40:25.415148 1900 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:40:25.415455 kubelet[1900]: I1101 00:40:25.415433 1900 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:40:25.415598 kubelet[1900]: I1101 00:40:25.415582 1900 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:40:25.415783 kubelet[1900]: E1101 00:40:25.415747 1900 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:40:25.522253 kubelet[1900]: E1101 00:40:25.522202 1900 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:40:25.587607 kubelet[1900]: I1101 00:40:25.587487 1900 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:40:25.587607 kubelet[1900]: I1101 00:40:25.587514 1900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:40:25.587607 kubelet[1900]: I1101 00:40:25.587541 1900 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:40:25.587888 kubelet[1900]: I1101 00:40:25.587737 1900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:40:25.587888 kubelet[1900]: I1101 00:40:25.587751 1900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:40:25.587888 kubelet[1900]: I1101 00:40:25.587778 1900 policy_none.go:49] "None policy: Start" Nov 1 00:40:25.587888 kubelet[1900]: I1101 00:40:25.587805 1900 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:40:25.587888 kubelet[1900]: I1101 00:40:25.587829 1900 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:40:25.588135 kubelet[1900]: I1101 00:40:25.588024 1900 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:40:25.588135 kubelet[1900]: I1101 00:40:25.588040 1900 policy_none.go:47] "Start" Nov 1 00:40:25.602374 kubelet[1900]: E1101 00:40:25.602325 1900 manager.go:513] "Failed to 
read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:40:25.603015 kubelet[1900]: I1101 00:40:25.602971 1900 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:40:25.603241 kubelet[1900]: I1101 00:40:25.603184 1900 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:40:25.604682 kubelet[1900]: I1101 00:40:25.604650 1900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:40:25.627100 kubelet[1900]: E1101 00:40:25.627044 1900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:40:25.716426 kubelet[1900]: I1101 00:40:25.716375 1900 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.723187 kubelet[1900]: I1101 00:40:25.723141 1900 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.723471 kubelet[1900]: I1101 00:40:25.723159 1900 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.740839 kubelet[1900]: I1101 00:40:25.740790 1900 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.757265 kubelet[1900]: I1101 00:40:25.757219 1900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:40:25.758832 kubelet[1900]: I1101 00:40:25.758490 1900 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.758832 kubelet[1900]: I1101 00:40:25.758616 1900 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-39b63463e5" Nov 1 
00:40:25.768926 kubelet[1900]: I1101 00:40:25.768852 1900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:40:25.770433 kubelet[1900]: I1101 00:40:25.770396 1900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:40:25.776658 kubelet[1900]: I1101 00:40:25.776615 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.776924 kubelet[1900]: I1101 00:40:25.776897 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777108 kubelet[1900]: I1101 00:40:25.777081 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/481159a788ccf67abecc90cf3a156d86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" (UID: \"481159a788ccf67abecc90cf3a156d86\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777253 kubelet[1900]: I1101 00:40:25.777229 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777371 kubelet[1900]: I1101 00:40:25.777351 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777513 kubelet[1900]: I1101 00:40:25.777491 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777638 kubelet[1900]: I1101 00:40:25.777617 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce92f517e597182bfa59fb7407cf8b1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-39b63463e5\" (UID: \"5ce92f517e597182bfa59fb7407cf8b1\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777792 kubelet[1900]: I1101 00:40:25.777764 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:25.777919 kubelet[1900]: I1101 00:40:25.777899 1900 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2700648bc049198e7de2f1cf298d2298-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-39b63463e5\" (UID: \"2700648bc049198e7de2f1cf298d2298\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:26.062273 kubelet[1900]: E1101 00:40:26.062227 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:26.070066 kubelet[1900]: E1101 00:40:26.070022 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:26.071438 kubelet[1900]: E1101 00:40:26.071398 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:26.165469 sudo[1911]: pam_unix(sudo:session): session closed for user root Nov 1 00:40:26.270313 kubelet[1900]: I1101 00:40:26.270271 1900 apiserver.go:52] "Watching apiserver" Nov 1 00:40:26.370453 kubelet[1900]: I1101 00:40:26.370315 1900 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:40:26.433770 kubelet[1900]: I1101 00:40:26.432414 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" podStartSLOduration=1.432392854 podStartE2EDuration="1.432392854s" podCreationTimestamp="2025-11-01 00:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:26.422241391 +0000 UTC m=+1.337165935" watchObservedRunningTime="2025-11-01 00:40:26.432392854 +0000 UTC 
m=+1.347317393" Nov 1 00:40:26.446177 kubelet[1900]: I1101 00:40:26.446091 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-39b63463e5" podStartSLOduration=1.44606692 podStartE2EDuration="1.44606692s" podCreationTimestamp="2025-11-01 00:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:26.434343419 +0000 UTC m=+1.349267975" watchObservedRunningTime="2025-11-01 00:40:26.44606692 +0000 UTC m=+1.360991444" Nov 1 00:40:26.460754 kubelet[1900]: I1101 00:40:26.460666 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-39b63463e5" podStartSLOduration=1.4606429 podStartE2EDuration="1.4606429s" podCreationTimestamp="2025-11-01 00:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:26.448015784 +0000 UTC m=+1.362940333" watchObservedRunningTime="2025-11-01 00:40:26.4606429 +0000 UTC m=+1.375567438" Nov 1 00:40:26.527039 kubelet[1900]: I1101 00:40:26.526964 1900 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:26.527382 kubelet[1900]: E1101 00:40:26.527358 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:26.528311 kubelet[1900]: E1101 00:40:26.528281 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:26.550209 kubelet[1900]: I1101 00:40:26.550168 1900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:40:26.550420 kubelet[1900]: E1101 00:40:26.550269 1900 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-39b63463e5\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-39b63463e5" Nov 1 00:40:26.550510 kubelet[1900]: E1101 00:40:26.550491 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:27.528168 kubelet[1900]: E1101 00:40:27.528119 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:27.540088 kubelet[1900]: E1101 00:40:27.539130 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:27.540088 kubelet[1900]: E1101 00:40:27.539723 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:28.299313 sudo[1298]: pam_unix(sudo:session): session closed for user root Nov 1 00:40:28.304609 sshd[1295]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:28.308534 systemd[1]: sshd@4-143.198.72.73:22-139.178.89.65:50090.service: Deactivated successfully. Nov 1 00:40:28.309485 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:40:28.309673 systemd[1]: session-5.scope: Consumed 6.460s CPU time. Nov 1 00:40:28.310860 systemd-logind[1184]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:40:28.312001 systemd-logind[1184]: Removed session 5. 
Nov 1 00:40:28.530452 kubelet[1900]: E1101 00:40:28.530403 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:29.159641 kubelet[1900]: I1101 00:40:29.159602 1900 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:40:29.160547 env[1194]: time="2025-11-01T00:40:29.160493743Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:40:29.161706 kubelet[1900]: I1101 00:40:29.161668 1900 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:40:30.121755 systemd[1]: Created slice kubepods-besteffort-podc088c414_abb3_497a_a020_88e08e497ff4.slice. Nov 1 00:40:30.149620 systemd[1]: Created slice kubepods-burstable-podb9384eea_0d2c_4e02_9c7d_3022d6148970.slice. Nov 1 00:40:30.211139 kubelet[1900]: I1101 00:40:30.211086 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9384eea-0d2c-4e02-9c7d-3022d6148970-clustermesh-secrets\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211139 kubelet[1900]: I1101 00:40:30.211137 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdjn\" (UniqueName: \"kubernetes.io/projected/c088c414-abb3-497a-a020-88e08e497ff4-kube-api-access-zgdjn\") pod \"kube-proxy-2dm95\" (UID: \"c088c414-abb3-497a-a020-88e08e497ff4\") " pod="kube-system/kube-proxy-2dm95" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211161 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-bpf-maps\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211177 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-hostproc\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211192 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-cgroup\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211207 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-xtables-lock\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211223 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfdf\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-kube-api-access-jrfdf\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211627 kubelet[1900]: I1101 00:40:30.211239 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-config-path\") pod \"cilium-tbb8q\" (UID: 
\"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211255 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-run\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211271 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cni-path\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211296 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-etc-cni-netd\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211311 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-net\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211328 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-kernel\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211805 kubelet[1900]: I1101 00:40:30.211343 
1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-hubble-tls\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.211971 kubelet[1900]: I1101 00:40:30.211357 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c088c414-abb3-497a-a020-88e08e497ff4-xtables-lock\") pod \"kube-proxy-2dm95\" (UID: \"c088c414-abb3-497a-a020-88e08e497ff4\") " pod="kube-system/kube-proxy-2dm95" Nov 1 00:40:30.211971 kubelet[1900]: I1101 00:40:30.211370 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c088c414-abb3-497a-a020-88e08e497ff4-lib-modules\") pod \"kube-proxy-2dm95\" (UID: \"c088c414-abb3-497a-a020-88e08e497ff4\") " pod="kube-system/kube-proxy-2dm95" Nov 1 00:40:30.211971 kubelet[1900]: I1101 00:40:30.211386 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c088c414-abb3-497a-a020-88e08e497ff4-kube-proxy\") pod \"kube-proxy-2dm95\" (UID: \"c088c414-abb3-497a-a020-88e08e497ff4\") " pod="kube-system/kube-proxy-2dm95" Nov 1 00:40:30.211971 kubelet[1900]: I1101 00:40:30.211403 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-lib-modules\") pod \"cilium-tbb8q\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") " pod="kube-system/cilium-tbb8q" Nov 1 00:40:30.265551 systemd[1]: Created slice kubepods-besteffort-pod7705ab90_815a_4f68_98ad_343a00bbfbaf.slice. 
Nov 1 00:40:30.312389 kubelet[1900]: I1101 00:40:30.312340 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grns\" (UniqueName: \"kubernetes.io/projected/7705ab90-815a-4f68-98ad-343a00bbfbaf-kube-api-access-9grns\") pod \"cilium-operator-6f9c7c5859-ktg9r\" (UID: \"7705ab90-815a-4f68-98ad-343a00bbfbaf\") " pod="kube-system/cilium-operator-6f9c7c5859-ktg9r" Nov 1 00:40:30.312847 kubelet[1900]: I1101 00:40:30.312825 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7705ab90-815a-4f68-98ad-343a00bbfbaf-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-ktg9r\" (UID: \"7705ab90-815a-4f68-98ad-343a00bbfbaf\") " pod="kube-system/cilium-operator-6f9c7c5859-ktg9r" Nov 1 00:40:30.321131 kubelet[1900]: I1101 00:40:30.321064 1900 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:40:30.432453 kubelet[1900]: E1101 00:40:30.432317 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:30.434855 env[1194]: time="2025-11-01T00:40:30.434336835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dm95,Uid:c088c414-abb3-497a-a020-88e08e497ff4,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:30.454014 env[1194]: time="2025-11-01T00:40:30.453899157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:30.454403 env[1194]: time="2025-11-01T00:40:30.453959948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:30.454403 env[1194]: time="2025-11-01T00:40:30.453977445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:30.454403 env[1194]: time="2025-11-01T00:40:30.454194556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c8b499e2b6d9b405bfdf2e03f7515b73e9b78cbf38d71df69b2563dc620d045 pid=1984 runtime=io.containerd.runc.v2 Nov 1 00:40:30.457100 kubelet[1900]: E1101 00:40:30.457065 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:30.459868 env[1194]: time="2025-11-01T00:40:30.459821600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbb8q,Uid:b9384eea-0d2c-4e02-9c7d-3022d6148970,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:30.474289 systemd[1]: Started cri-containerd-2c8b499e2b6d9b405bfdf2e03f7515b73e9b78cbf38d71df69b2563dc620d045.scope. Nov 1 00:40:30.492330 env[1194]: time="2025-11-01T00:40:30.490924838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:30.492330 env[1194]: time="2025-11-01T00:40:30.491001165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:30.492330 env[1194]: time="2025-11-01T00:40:30.491014300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:30.492330 env[1194]: time="2025-11-01T00:40:30.491158955Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020 pid=2017 runtime=io.containerd.runc.v2 Nov 1 00:40:30.513763 systemd[1]: Started cri-containerd-988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020.scope. Nov 1 00:40:30.524636 env[1194]: time="2025-11-01T00:40:30.524587644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dm95,Uid:c088c414-abb3-497a-a020-88e08e497ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c8b499e2b6d9b405bfdf2e03f7515b73e9b78cbf38d71df69b2563dc620d045\"" Nov 1 00:40:30.527411 kubelet[1900]: E1101 00:40:30.526528 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:30.535396 env[1194]: time="2025-11-01T00:40:30.535345329Z" level=info msg="CreateContainer within sandbox \"2c8b499e2b6d9b405bfdf2e03f7515b73e9b78cbf38d71df69b2563dc620d045\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:40:30.563976 env[1194]: time="2025-11-01T00:40:30.563914210Z" level=info msg="CreateContainer within sandbox \"2c8b499e2b6d9b405bfdf2e03f7515b73e9b78cbf38d71df69b2563dc620d045\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3abda1ca7e9f23ccffa962cee3d428329c458ab5a845d9f4a9761e5908d7dc4c\"" Nov 1 00:40:30.567897 env[1194]: time="2025-11-01T00:40:30.567856355Z" level=info msg="StartContainer for \"3abda1ca7e9f23ccffa962cee3d428329c458ab5a845d9f4a9761e5908d7dc4c\"" Nov 1 00:40:30.568334 env[1194]: time="2025-11-01T00:40:30.568297053Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-tbb8q,Uid:b9384eea-0d2c-4e02-9c7d-3022d6148970,Namespace:kube-system,Attempt:0,} returns sandbox id \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\"" Nov 1 00:40:30.569823 kubelet[1900]: E1101 00:40:30.569787 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:30.571829 kubelet[1900]: E1101 00:40:30.571791 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:30.573023 env[1194]: time="2025-11-01T00:40:30.572977297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ktg9r,Uid:7705ab90-815a-4f68-98ad-343a00bbfbaf,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:30.578504 env[1194]: time="2025-11-01T00:40:30.578456782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:40:30.602237 env[1194]: time="2025-11-01T00:40:30.602109045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:30.602237 env[1194]: time="2025-11-01T00:40:30.602191054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:30.602571 env[1194]: time="2025-11-01T00:40:30.602208581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:30.604258 env[1194]: time="2025-11-01T00:40:30.604080081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961 pid=2080 runtime=io.containerd.runc.v2 Nov 1 00:40:30.608505 systemd[1]: Started cri-containerd-3abda1ca7e9f23ccffa962cee3d428329c458ab5a845d9f4a9761e5908d7dc4c.scope. Nov 1 00:40:30.640784 systemd[1]: Started cri-containerd-b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961.scope. Nov 1 00:40:30.671356 env[1194]: time="2025-11-01T00:40:30.671295114Z" level=info msg="StartContainer for \"3abda1ca7e9f23ccffa962cee3d428329c458ab5a845d9f4a9761e5908d7dc4c\" returns successfully" Nov 1 00:40:30.715542 env[1194]: time="2025-11-01T00:40:30.712739317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-ktg9r,Uid:7705ab90-815a-4f68-98ad-343a00bbfbaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\"" Nov 1 00:40:30.715761 kubelet[1900]: E1101 00:40:30.713764 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:31.560125 kubelet[1900]: E1101 00:40:31.560076 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:31.575171 kubelet[1900]: I1101 00:40:31.575097 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2dm95" podStartSLOduration=1.5750800740000002 podStartE2EDuration="1.575080074s" podCreationTimestamp="2025-11-01 00:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:31.574301111 +0000 UTC m=+6.489225655" watchObservedRunningTime="2025-11-01 00:40:31.575080074 +0000 UTC m=+6.490004619" Nov 1 00:40:32.120846 kubelet[1900]: E1101 00:40:32.120765 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:32.680056 kubelet[1900]: E1101 00:40:32.679975 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:32.682542 kubelet[1900]: E1101 00:40:32.681781 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:36.533832 kubelet[1900]: E1101 00:40:36.533794 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:36.783297 kubelet[1900]: E1101 00:40:36.783176 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:36.859127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453773726.mount: Deactivated successfully. Nov 1 00:40:38.755528 update_engine[1185]: I1101 00:40:38.755450 1185 update_attempter.cc:509] Updating boot flags... 
Nov 1 00:40:40.300162 env[1194]: time="2025-11-01T00:40:40.300108668Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:40.302628 env[1194]: time="2025-11-01T00:40:40.302587706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:40.304630 env[1194]: time="2025-11-01T00:40:40.304584057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:40.305487 env[1194]: time="2025-11-01T00:40:40.305455551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:40:40.307647 env[1194]: time="2025-11-01T00:40:40.307615177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:40:40.315414 env[1194]: time="2025-11-01T00:40:40.315355230Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:40:40.325338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373366267.mount: Deactivated successfully. Nov 1 00:40:40.332739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189835984.mount: Deactivated successfully. 
Nov 1 00:40:40.336147 env[1194]: time="2025-11-01T00:40:40.336088803Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\"" Nov 1 00:40:40.338964 env[1194]: time="2025-11-01T00:40:40.337230302Z" level=info msg="StartContainer for \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\"" Nov 1 00:40:40.366406 systemd[1]: Started cri-containerd-313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645.scope. Nov 1 00:40:40.420475 env[1194]: time="2025-11-01T00:40:40.420144881Z" level=info msg="StartContainer for \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\" returns successfully" Nov 1 00:40:40.430904 systemd[1]: cri-containerd-313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645.scope: Deactivated successfully. Nov 1 00:40:40.482598 env[1194]: time="2025-11-01T00:40:40.481978410Z" level=info msg="shim disconnected" id=313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645 Nov 1 00:40:40.482598 env[1194]: time="2025-11-01T00:40:40.482060828Z" level=warning msg="cleaning up after shim disconnected" id=313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645 namespace=k8s.io Nov 1 00:40:40.482598 env[1194]: time="2025-11-01T00:40:40.482075606Z" level=info msg="cleaning up dead shim" Nov 1 00:40:40.493888 env[1194]: time="2025-11-01T00:40:40.493766499Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2333 runtime=io.containerd.runc.v2\n" Nov 1 00:40:40.700432 kubelet[1900]: E1101 00:40:40.700312 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:40.712924 env[1194]: 
time="2025-11-01T00:40:40.712843538Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:40:40.726525 env[1194]: time="2025-11-01T00:40:40.726474220Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\"" Nov 1 00:40:40.727935 env[1194]: time="2025-11-01T00:40:40.727457959Z" level=info msg="StartContainer for \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\"" Nov 1 00:40:40.749868 systemd[1]: Started cri-containerd-e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea.scope. Nov 1 00:40:40.789064 env[1194]: time="2025-11-01T00:40:40.789002999Z" level=info msg="StartContainer for \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\" returns successfully" Nov 1 00:40:40.805479 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:40:40.805692 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:40:40.806832 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:40:40.808904 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:40:40.811735 systemd[1]: cri-containerd-e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea.scope: Deactivated successfully. Nov 1 00:40:40.821948 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:40:40.843762 env[1194]: time="2025-11-01T00:40:40.843710427Z" level=info msg="shim disconnected" id=e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea Nov 1 00:40:40.844235 env[1194]: time="2025-11-01T00:40:40.844208089Z" level=warning msg="cleaning up after shim disconnected" id=e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea namespace=k8s.io Nov 1 00:40:40.844356 env[1194]: time="2025-11-01T00:40:40.844339842Z" level=info msg="cleaning up dead shim" Nov 1 00:40:40.855766 env[1194]: time="2025-11-01T00:40:40.855708119Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n" Nov 1 00:40:41.323284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645-rootfs.mount: Deactivated successfully. Nov 1 00:40:41.695820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776315094.mount: Deactivated successfully. Nov 1 00:40:41.708971 kubelet[1900]: E1101 00:40:41.705364 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:41.718717 env[1194]: time="2025-11-01T00:40:41.718660305Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:40:41.741600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540099479.mount: Deactivated successfully. 
Nov 1 00:40:41.747290 env[1194]: time="2025-11-01T00:40:41.747219202Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\"" Nov 1 00:40:41.749068 env[1194]: time="2025-11-01T00:40:41.748223178Z" level=info msg="StartContainer for \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\"" Nov 1 00:40:41.787529 systemd[1]: Started cri-containerd-358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1.scope. Nov 1 00:40:41.839206 env[1194]: time="2025-11-01T00:40:41.839076455Z" level=info msg="StartContainer for \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\" returns successfully" Nov 1 00:40:41.846185 systemd[1]: cri-containerd-358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1.scope: Deactivated successfully. Nov 1 00:40:41.882178 env[1194]: time="2025-11-01T00:40:41.882058251Z" level=info msg="shim disconnected" id=358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1 Nov 1 00:40:41.882512 env[1194]: time="2025-11-01T00:40:41.882488980Z" level=warning msg="cleaning up after shim disconnected" id=358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1 namespace=k8s.io Nov 1 00:40:41.882615 env[1194]: time="2025-11-01T00:40:41.882600310Z" level=info msg="cleaning up dead shim" Nov 1 00:40:41.907104 env[1194]: time="2025-11-01T00:40:41.907040630Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Nov 1 00:40:42.474864 env[1194]: time="2025-11-01T00:40:42.474774018Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:40:42.476721 env[1194]: time="2025-11-01T00:40:42.476653744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:42.478214 env[1194]: time="2025-11-01T00:40:42.478176011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:40:42.478801 env[1194]: time="2025-11-01T00:40:42.478767990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:40:42.485947 env[1194]: time="2025-11-01T00:40:42.485872139Z" level=info msg="CreateContainer within sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:40:42.498994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328375409.mount: Deactivated successfully. Nov 1 00:40:42.506448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281837538.mount: Deactivated successfully. 
Nov 1 00:40:42.509506 env[1194]: time="2025-11-01T00:40:42.509446535Z" level=info msg="CreateContainer within sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\"" Nov 1 00:40:42.510342 env[1194]: time="2025-11-01T00:40:42.510251737Z" level=info msg="StartContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\"" Nov 1 00:40:42.530001 systemd[1]: Started cri-containerd-eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371.scope. Nov 1 00:40:42.575477 env[1194]: time="2025-11-01T00:40:42.575401278Z" level=info msg="StartContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" returns successfully" Nov 1 00:40:42.710539 kubelet[1900]: E1101 00:40:42.710488 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:42.716783 kubelet[1900]: E1101 00:40:42.716737 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:42.726185 env[1194]: time="2025-11-01T00:40:42.726047952Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:40:42.737778 kubelet[1900]: I1101 00:40:42.737712 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-ktg9r" podStartSLOduration=0.972202941 podStartE2EDuration="12.737689414s" podCreationTimestamp="2025-11-01 00:40:30 +0000 UTC" firstStartedPulling="2025-11-01 00:40:30.714788595 +0000 UTC m=+5.629713119" 
lastFinishedPulling="2025-11-01 00:40:42.480275056 +0000 UTC m=+17.395199592" observedRunningTime="2025-11-01 00:40:42.737690411 +0000 UTC m=+17.652614961" watchObservedRunningTime="2025-11-01 00:40:42.737689414 +0000 UTC m=+17.652613958" Nov 1 00:40:42.750149 env[1194]: time="2025-11-01T00:40:42.750004021Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\"" Nov 1 00:40:42.754358 env[1194]: time="2025-11-01T00:40:42.753413387Z" level=info msg="StartContainer for \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\"" Nov 1 00:40:42.778286 systemd[1]: Started cri-containerd-b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c.scope. Nov 1 00:40:42.842676 systemd[1]: cri-containerd-b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c.scope: Deactivated successfully. 
Nov 1 00:40:42.845748 env[1194]: time="2025-11-01T00:40:42.845678137Z" level=info msg="StartContainer for \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\" returns successfully" Nov 1 00:40:42.909004 env[1194]: time="2025-11-01T00:40:42.908928873Z" level=info msg="shim disconnected" id=b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c Nov 1 00:40:42.909379 env[1194]: time="2025-11-01T00:40:42.909336991Z" level=warning msg="cleaning up after shim disconnected" id=b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c namespace=k8s.io Nov 1 00:40:42.909559 env[1194]: time="2025-11-01T00:40:42.909535807Z" level=info msg="cleaning up dead shim" Nov 1 00:40:42.931184 env[1194]: time="2025-11-01T00:40:42.931113792Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2556 runtime=io.containerd.runc.v2\n" Nov 1 00:40:43.735557 kubelet[1900]: E1101 00:40:43.735511 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:43.740339 kubelet[1900]: E1101 00:40:43.736477 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:43.752332 env[1194]: time="2025-11-01T00:40:43.752258425Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:40:43.774237 env[1194]: time="2025-11-01T00:40:43.774163501Z" level=info msg="CreateContainer within sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\"" Nov 1 00:40:43.775224 env[1194]: time="2025-11-01T00:40:43.775181879Z" level=info msg="StartContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\"" Nov 1 00:40:43.832513 systemd[1]: Started cri-containerd-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6.scope. Nov 1 00:40:43.917840 env[1194]: time="2025-11-01T00:40:43.917702377Z" level=info msg="StartContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" returns successfully" Nov 1 00:40:44.086651 kubelet[1900]: I1101 00:40:44.086524 1900 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:40:44.139702 systemd[1]: Created slice kubepods-burstable-poda0b47612_42e7_41ae_a77a_73a0d21d2624.slice. Nov 1 00:40:44.154501 systemd[1]: Created slice kubepods-burstable-podcb80474c_fb9e_4f39_9a09_41918c63c865.slice. Nov 1 00:40:44.236163 kubelet[1900]: I1101 00:40:44.236062 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mzk\" (UniqueName: \"kubernetes.io/projected/a0b47612-42e7-41ae-a77a-73a0d21d2624-kube-api-access-g2mzk\") pod \"coredns-66bc5c9577-g4xnl\" (UID: \"a0b47612-42e7-41ae-a77a-73a0d21d2624\") " pod="kube-system/coredns-66bc5c9577-g4xnl" Nov 1 00:40:44.236521 kubelet[1900]: I1101 00:40:44.236497 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80474c-fb9e-4f39-9a09-41918c63c865-config-volume\") pod \"coredns-66bc5c9577-5qg5c\" (UID: \"cb80474c-fb9e-4f39-9a09-41918c63c865\") " pod="kube-system/coredns-66bc5c9577-5qg5c" Nov 1 00:40:44.236670 kubelet[1900]: I1101 00:40:44.236651 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a0b47612-42e7-41ae-a77a-73a0d21d2624-config-volume\") pod \"coredns-66bc5c9577-g4xnl\" (UID: \"a0b47612-42e7-41ae-a77a-73a0d21d2624\") " pod="kube-system/coredns-66bc5c9577-g4xnl" Nov 1 00:40:44.236828 kubelet[1900]: I1101 00:40:44.236808 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzhq\" (UniqueName: \"kubernetes.io/projected/cb80474c-fb9e-4f39-9a09-41918c63c865-kube-api-access-hmzhq\") pod \"coredns-66bc5c9577-5qg5c\" (UID: \"cb80474c-fb9e-4f39-9a09-41918c63c865\") " pod="kube-system/coredns-66bc5c9577-5qg5c" Nov 1 00:40:44.323936 systemd[1]: run-containerd-runc-k8s.io-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6-runc.t9trQv.mount: Deactivated successfully. Nov 1 00:40:44.449878 kubelet[1900]: E1101 00:40:44.449728 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:44.451850 env[1194]: time="2025-11-01T00:40:44.451799553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4xnl,Uid:a0b47612-42e7-41ae-a77a-73a0d21d2624,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:44.464596 kubelet[1900]: E1101 00:40:44.464470 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:44.465491 env[1194]: time="2025-11-01T00:40:44.465440731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5qg5c,Uid:cb80474c-fb9e-4f39-9a09-41918c63c865,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:44.741296 kubelet[1900]: E1101 00:40:44.741254 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 
1 00:40:44.785650 kubelet[1900]: I1101 00:40:44.785578 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tbb8q" podStartSLOduration=5.052963267 podStartE2EDuration="14.785558286s" podCreationTimestamp="2025-11-01 00:40:30 +0000 UTC" firstStartedPulling="2025-11-01 00:40:30.574844867 +0000 UTC m=+5.489769413" lastFinishedPulling="2025-11-01 00:40:40.307439909 +0000 UTC m=+15.222364432" observedRunningTime="2025-11-01 00:40:44.783688429 +0000 UTC m=+19.698612974" watchObservedRunningTime="2025-11-01 00:40:44.785558286 +0000 UTC m=+19.700482831" Nov 1 00:40:45.744095 kubelet[1900]: E1101 00:40:45.743699 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:46.601117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:40:46.601297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:40:46.599311 systemd-networkd[1005]: cilium_host: Link UP Nov 1 00:40:46.601750 systemd-networkd[1005]: cilium_net: Link UP Nov 1 00:40:46.602032 systemd-networkd[1005]: cilium_net: Gained carrier Nov 1 00:40:46.603451 systemd-networkd[1005]: cilium_host: Gained carrier Nov 1 00:40:46.618144 systemd-networkd[1005]: cilium_net: Gained IPv6LL Nov 1 00:40:46.745626 kubelet[1900]: E1101 00:40:46.745581 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:46.752316 systemd-networkd[1005]: cilium_vxlan: Link UP Nov 1 00:40:46.752326 systemd-networkd[1005]: cilium_vxlan: Gained carrier Nov 1 00:40:47.189036 kernel: NET: Registered PF_ALG protocol family Nov 1 00:40:47.421520 systemd-networkd[1005]: cilium_host: Gained IPv6LL Nov 1 00:40:48.116772 systemd-networkd[1005]: lxc_health: Link UP Nov 1 
00:40:48.122021 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:40:48.122346 systemd-networkd[1005]: lxc_health: Gained carrier Nov 1 00:40:48.451474 systemd-networkd[1005]: cilium_vxlan: Gained IPv6LL Nov 1 00:40:48.458286 kubelet[1900]: E1101 00:40:48.457763 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:48.506598 systemd-networkd[1005]: lxcd64335de6ea5: Link UP Nov 1 00:40:48.524167 kernel: eth0: renamed from tmp8fec8 Nov 1 00:40:48.530148 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd64335de6ea5: link becomes ready Nov 1 00:40:48.530257 systemd-networkd[1005]: lxcd64335de6ea5: Gained carrier Nov 1 00:40:48.542280 systemd-networkd[1005]: lxce026c151c5d4: Link UP Nov 1 00:40:48.545112 kernel: eth0: renamed from tmp701b1 Nov 1 00:40:48.551597 systemd-networkd[1005]: lxce026c151c5d4: Gained carrier Nov 1 00:40:48.552113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce026c151c5d4: link becomes ready Nov 1 00:40:49.849346 systemd-networkd[1005]: lxce026c151c5d4: Gained IPv6LL Nov 1 00:40:49.977283 systemd-networkd[1005]: lxc_health: Gained IPv6LL Nov 1 00:40:50.233303 systemd-networkd[1005]: lxcd64335de6ea5: Gained IPv6LL Nov 1 00:40:53.348284 env[1194]: time="2025-11-01T00:40:53.348171133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:53.348284 env[1194]: time="2025-11-01T00:40:53.348228422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:53.349149 env[1194]: time="2025-11-01T00:40:53.348243418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:53.349149 env[1194]: time="2025-11-01T00:40:53.348510175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529 pid=3113 runtime=io.containerd.runc.v2 Nov 1 00:40:53.386044 systemd[1]: Started cri-containerd-8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529.scope. Nov 1 00:40:53.405262 systemd[1]: run-containerd-runc-k8s.io-8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529-runc.4HTgJu.mount: Deactivated successfully. Nov 1 00:40:53.457175 env[1194]: time="2025-11-01T00:40:53.457075060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:53.457175 env[1194]: time="2025-11-01T00:40:53.457118386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:53.457175 env[1194]: time="2025-11-01T00:40:53.457129386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:53.457860 env[1194]: time="2025-11-01T00:40:53.457778625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/701b1ff9b5c8d25a14ddc42a610a33676a4d2cde9a5ab5c62d6697d721fbdcca pid=3146 runtime=io.containerd.runc.v2 Nov 1 00:40:53.481208 systemd[1]: Started cri-containerd-701b1ff9b5c8d25a14ddc42a610a33676a4d2cde9a5ab5c62d6697d721fbdcca.scope. 
Nov 1 00:40:53.548898 env[1194]: time="2025-11-01T00:40:53.548819320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g4xnl,Uid:a0b47612-42e7-41ae-a77a-73a0d21d2624,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529\"" Nov 1 00:40:53.549691 kubelet[1900]: E1101 00:40:53.549657 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:53.555062 env[1194]: time="2025-11-01T00:40:53.555015778Z" level=info msg="CreateContainer within sandbox \"8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:40:53.585426 env[1194]: time="2025-11-01T00:40:53.585367023Z" level=info msg="CreateContainer within sandbox \"8fec8bca1039c8c9d173f83afc12e69894839c50110aeba0667a298bc6eb1529\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd2bc1ae67ad387d855ccb639068f1e33e84de540e58b6c6f477e613ef4da3a3\"" Nov 1 00:40:53.589024 env[1194]: time="2025-11-01T00:40:53.586839406Z" level=info msg="StartContainer for \"bd2bc1ae67ad387d855ccb639068f1e33e84de540e58b6c6f477e613ef4da3a3\"" Nov 1 00:40:53.590780 env[1194]: time="2025-11-01T00:40:53.590245880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5qg5c,Uid:cb80474c-fb9e-4f39-9a09-41918c63c865,Namespace:kube-system,Attempt:0,} returns sandbox id \"701b1ff9b5c8d25a14ddc42a610a33676a4d2cde9a5ab5c62d6697d721fbdcca\"" Nov 1 00:40:53.594599 kubelet[1900]: E1101 00:40:53.591105 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:53.604082 env[1194]: time="2025-11-01T00:40:53.600867220Z" level=info msg="CreateContainer within sandbox 
\"701b1ff9b5c8d25a14ddc42a610a33676a4d2cde9a5ab5c62d6697d721fbdcca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:40:53.622322 systemd[1]: Started cri-containerd-bd2bc1ae67ad387d855ccb639068f1e33e84de540e58b6c6f477e613ef4da3a3.scope. Nov 1 00:40:53.624786 env[1194]: time="2025-11-01T00:40:53.624745596Z" level=info msg="CreateContainer within sandbox \"701b1ff9b5c8d25a14ddc42a610a33676a4d2cde9a5ab5c62d6697d721fbdcca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ceb520512a94ccaebbf2adec4d91cf83d6f2e682ad625e4b484e7163821909d9\"" Nov 1 00:40:53.627087 env[1194]: time="2025-11-01T00:40:53.626037989Z" level=info msg="StartContainer for \"ceb520512a94ccaebbf2adec4d91cf83d6f2e682ad625e4b484e7163821909d9\"" Nov 1 00:40:53.652350 systemd[1]: Started cri-containerd-ceb520512a94ccaebbf2adec4d91cf83d6f2e682ad625e4b484e7163821909d9.scope. Nov 1 00:40:53.691070 env[1194]: time="2025-11-01T00:40:53.690973857Z" level=info msg="StartContainer for \"bd2bc1ae67ad387d855ccb639068f1e33e84de540e58b6c6f477e613ef4da3a3\" returns successfully" Nov 1 00:40:53.713495 env[1194]: time="2025-11-01T00:40:53.713418072Z" level=info msg="StartContainer for \"ceb520512a94ccaebbf2adec4d91cf83d6f2e682ad625e4b484e7163821909d9\" returns successfully" Nov 1 00:40:53.763284 kubelet[1900]: E1101 00:40:53.762874 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:53.765706 kubelet[1900]: E1101 00:40:53.765673 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:53.818188 kubelet[1900]: I1101 00:40:53.818127 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g4xnl" podStartSLOduration=23.818097395 
podStartE2EDuration="23.818097395s" podCreationTimestamp="2025-11-01 00:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:53.817089773 +0000 UTC m=+28.732014514" watchObservedRunningTime="2025-11-01 00:40:53.818097395 +0000 UTC m=+28.733021940" Nov 1 00:40:53.818392 kubelet[1900]: I1101 00:40:53.818226 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5qg5c" podStartSLOduration=23.818222474 podStartE2EDuration="23.818222474s" podCreationTimestamp="2025-11-01 00:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:53.793467203 +0000 UTC m=+28.708391768" watchObservedRunningTime="2025-11-01 00:40:53.818222474 +0000 UTC m=+28.733147018" Nov 1 00:40:54.768871 kubelet[1900]: E1101 00:40:54.768803 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:54.773080 kubelet[1900]: E1101 00:40:54.769950 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:55.771134 kubelet[1900]: E1101 00:40:55.771056 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:55.772224 kubelet[1900]: E1101 00:40:55.772196 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:59.871636 kubelet[1900]: I1101 00:40:59.871580 1900 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Nov 1 00:40:59.872501 kubelet[1900]: E1101 00:40:59.872408 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:00.783607 kubelet[1900]: E1101 00:41:00.783178 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:11.251352 systemd[1]: Started sshd@5-143.198.72.73:22-139.178.89.65:55868.service. Nov 1 00:41:11.316459 sshd[3274]: Accepted publickey for core from 139.178.89.65 port 55868 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:11.321412 sshd[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:11.333944 systemd-logind[1184]: New session 6 of user core. Nov 1 00:41:11.334315 systemd[1]: Started session-6.scope. Nov 1 00:41:11.574919 sshd[3274]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:11.579576 systemd[1]: sshd@5-143.198.72.73:22-139.178.89.65:55868.service: Deactivated successfully. Nov 1 00:41:11.580606 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:41:11.582086 systemd-logind[1184]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:41:11.583613 systemd-logind[1184]: Removed session 6. Nov 1 00:41:16.584007 systemd[1]: Started sshd@6-143.198.72.73:22-139.178.89.65:50816.service. Nov 1 00:41:16.632110 sshd[3289]: Accepted publickey for core from 139.178.89.65 port 50816 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:16.634904 sshd[3289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:16.645374 systemd[1]: Started session-7.scope. Nov 1 00:41:16.646333 systemd-logind[1184]: New session 7 of user core. 
Nov 1 00:41:16.820156 sshd[3289]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:16.823613 systemd[1]: sshd@6-143.198.72.73:22-139.178.89.65:50816.service: Deactivated successfully. Nov 1 00:41:16.824475 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:41:16.825803 systemd-logind[1184]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:41:16.826904 systemd-logind[1184]: Removed session 7. Nov 1 00:41:21.828435 systemd[1]: Started sshd@7-143.198.72.73:22-139.178.89.65:50832.service. Nov 1 00:41:21.881506 sshd[3303]: Accepted publickey for core from 139.178.89.65 port 50832 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:21.883677 sshd[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:21.890807 systemd[1]: Started session-8.scope. Nov 1 00:41:21.891867 systemd-logind[1184]: New session 8 of user core. Nov 1 00:41:22.056936 sshd[3303]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:22.061088 systemd[1]: sshd@7-143.198.72.73:22-139.178.89.65:50832.service: Deactivated successfully. Nov 1 00:41:22.061930 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:41:22.063225 systemd-logind[1184]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:41:22.064191 systemd-logind[1184]: Removed session 8. Nov 1 00:41:27.068941 systemd[1]: Started sshd@8-143.198.72.73:22-139.178.89.65:47966.service. Nov 1 00:41:27.143836 sshd[3319]: Accepted publickey for core from 139.178.89.65 port 47966 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:27.145653 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:27.152774 systemd[1]: Started session-9.scope. Nov 1 00:41:27.154108 systemd-logind[1184]: New session 9 of user core. 
Nov 1 00:41:27.300004 sshd[3319]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:27.303953 systemd[1]: sshd@8-143.198.72.73:22-139.178.89.65:47966.service: Deactivated successfully. Nov 1 00:41:27.305098 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:41:27.307014 systemd-logind[1184]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:41:27.309130 systemd-logind[1184]: Removed session 9. Nov 1 00:41:32.309068 systemd[1]: Started sshd@9-143.198.72.73:22-139.178.89.65:47974.service. Nov 1 00:41:32.357364 sshd[3334]: Accepted publickey for core from 139.178.89.65 port 47974 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:32.359863 sshd[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:32.365844 systemd-logind[1184]: New session 10 of user core. Nov 1 00:41:32.366269 systemd[1]: Started session-10.scope. Nov 1 00:41:32.509856 sshd[3334]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:32.515034 systemd[1]: sshd@9-143.198.72.73:22-139.178.89.65:47974.service: Deactivated successfully. Nov 1 00:41:32.516378 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:41:32.517518 systemd-logind[1184]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:41:32.519650 systemd[1]: Started sshd@10-143.198.72.73:22-139.178.89.65:47990.service. Nov 1 00:41:32.522222 systemd-logind[1184]: Removed session 10. Nov 1 00:41:32.573548 sshd[3346]: Accepted publickey for core from 139.178.89.65 port 47990 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:32.576173 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:32.581636 systemd-logind[1184]: New session 11 of user core. Nov 1 00:41:32.582967 systemd[1]: Started session-11.scope. 
Nov 1 00:41:32.794804 sshd[3346]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:32.800806 systemd[1]: sshd@10-143.198.72.73:22-139.178.89.65:47990.service: Deactivated successfully. Nov 1 00:41:32.801775 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:41:32.802578 systemd-logind[1184]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:41:32.804317 systemd[1]: Started sshd@11-143.198.72.73:22-139.178.89.65:47996.service. Nov 1 00:41:32.814120 systemd-logind[1184]: Removed session 11. Nov 1 00:41:32.855397 sshd[3355]: Accepted publickey for core from 139.178.89.65 port 47996 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:32.857574 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:32.864769 systemd-logind[1184]: New session 12 of user core. Nov 1 00:41:32.867222 systemd[1]: Started session-12.scope. Nov 1 00:41:33.061053 sshd[3355]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:33.065880 systemd-logind[1184]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:41:33.066731 systemd[1]: sshd@11-143.198.72.73:22-139.178.89.65:47996.service: Deactivated successfully. Nov 1 00:41:33.067670 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:41:33.068954 systemd-logind[1184]: Removed session 12. Nov 1 00:41:38.069766 systemd[1]: Started sshd@12-143.198.72.73:22-139.178.89.65:60790.service. Nov 1 00:41:38.122456 sshd[3368]: Accepted publickey for core from 139.178.89.65 port 60790 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:38.125076 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:38.131665 systemd[1]: Started session-13.scope. Nov 1 00:41:38.132305 systemd-logind[1184]: New session 13 of user core. 
Nov 1 00:41:38.291232 sshd[3368]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:38.295332 systemd[1]: sshd@12-143.198.72.73:22-139.178.89.65:60790.service: Deactivated successfully. Nov 1 00:41:38.296212 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:41:38.297281 systemd-logind[1184]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:41:38.298727 systemd-logind[1184]: Removed session 13. Nov 1 00:41:38.418116 kubelet[1900]: E1101 00:41:38.417956 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:43.299341 systemd[1]: Started sshd@13-143.198.72.73:22-139.178.89.65:60794.service. Nov 1 00:41:43.348449 sshd[3380]: Accepted publickey for core from 139.178.89.65 port 60794 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:43.350911 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:43.358103 systemd[1]: Started session-14.scope. Nov 1 00:41:43.358134 systemd-logind[1184]: New session 14 of user core. Nov 1 00:41:43.506289 sshd[3380]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:43.515024 systemd[1]: Started sshd@14-143.198.72.73:22-139.178.89.65:60810.service. Nov 1 00:41:43.519165 systemd[1]: sshd@13-143.198.72.73:22-139.178.89.65:60794.service: Deactivated successfully. Nov 1 00:41:43.520375 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:41:43.523735 systemd-logind[1184]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:41:43.526234 systemd-logind[1184]: Removed session 14. 
Nov 1 00:41:43.571048 sshd[3391]: Accepted publickey for core from 139.178.89.65 port 60810 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:43.572608 sshd[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:43.579330 systemd-logind[1184]: New session 15 of user core. Nov 1 00:41:43.579663 systemd[1]: Started session-15.scope. Nov 1 00:41:43.986804 sshd[3391]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:43.992861 systemd[1]: sshd@14-143.198.72.73:22-139.178.89.65:60810.service: Deactivated successfully. Nov 1 00:41:43.994109 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:41:43.995202 systemd-logind[1184]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:41:43.998009 systemd[1]: Started sshd@15-143.198.72.73:22-139.178.89.65:60826.service. Nov 1 00:41:44.000142 systemd-logind[1184]: Removed session 15. Nov 1 00:41:44.065181 sshd[3402]: Accepted publickey for core from 139.178.89.65 port 60826 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:44.067740 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:44.076573 systemd[1]: Started session-16.scope. Nov 1 00:41:44.077560 systemd-logind[1184]: New session 16 of user core. Nov 1 00:41:44.863705 sshd[3402]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:44.871852 systemd[1]: sshd@15-143.198.72.73:22-139.178.89.65:60826.service: Deactivated successfully. Nov 1 00:41:44.873336 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:41:44.875084 systemd-logind[1184]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:41:44.877565 systemd[1]: Started sshd@16-143.198.72.73:22-139.178.89.65:60838.service. Nov 1 00:41:44.883206 systemd-logind[1184]: Removed session 16. 
Nov 1 00:41:44.941562 sshd[3417]: Accepted publickey for core from 139.178.89.65 port 60838 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:44.944166 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:44.950541 systemd-logind[1184]: New session 17 of user core. Nov 1 00:41:44.951158 systemd[1]: Started session-17.scope. Nov 1 00:41:45.335450 sshd[3417]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:45.342249 systemd[1]: sshd@16-143.198.72.73:22-139.178.89.65:60838.service: Deactivated successfully. Nov 1 00:41:45.343820 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:41:45.346841 systemd-logind[1184]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:41:45.349418 systemd[1]: Started sshd@17-143.198.72.73:22-139.178.89.65:60848.service. Nov 1 00:41:45.353749 systemd-logind[1184]: Removed session 17. Nov 1 00:41:45.405880 sshd[3427]: Accepted publickey for core from 139.178.89.65 port 60848 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:45.408236 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:45.414733 systemd-logind[1184]: New session 18 of user core. Nov 1 00:41:45.415660 systemd[1]: Started session-18.scope. Nov 1 00:41:45.558889 sshd[3427]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:45.563233 systemd[1]: sshd@17-143.198.72.73:22-139.178.89.65:60848.service: Deactivated successfully. Nov 1 00:41:45.564059 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:41:45.564698 systemd-logind[1184]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:41:45.565852 systemd-logind[1184]: Removed session 18. Nov 1 00:41:50.568440 systemd[1]: Started sshd@18-143.198.72.73:22-139.178.89.65:41338.service. 
Nov 1 00:41:50.624689 sshd[3441]: Accepted publickey for core from 139.178.89.65 port 41338 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:50.627489 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:50.633787 systemd-logind[1184]: New session 19 of user core. Nov 1 00:41:50.635422 systemd[1]: Started session-19.scope. Nov 1 00:41:50.777526 sshd[3441]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:50.781600 systemd[1]: sshd@18-143.198.72.73:22-139.178.89.65:41338.service: Deactivated successfully. Nov 1 00:41:50.782765 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:41:50.784235 systemd-logind[1184]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:41:50.785700 systemd-logind[1184]: Removed session 19. Nov 1 00:41:51.416459 kubelet[1900]: E1101 00:41:51.416408 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:52.416418 kubelet[1900]: E1101 00:41:52.416359 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:55.419471 kubelet[1900]: E1101 00:41:55.419427 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:41:55.785694 systemd[1]: Started sshd@19-143.198.72.73:22-139.178.89.65:41342.service. 
Nov 1 00:41:55.837197 sshd[3455]: Accepted publickey for core from 139.178.89.65 port 41342 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:41:55.839113 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:41:55.845198 systemd-logind[1184]: New session 20 of user core. Nov 1 00:41:55.845494 systemd[1]: Started session-20.scope. Nov 1 00:41:55.989329 sshd[3455]: pam_unix(sshd:session): session closed for user core Nov 1 00:41:55.993427 systemd[1]: sshd@19-143.198.72.73:22-139.178.89.65:41342.service: Deactivated successfully. Nov 1 00:41:55.994284 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:41:55.995473 systemd-logind[1184]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:41:55.996389 systemd-logind[1184]: Removed session 20. Nov 1 00:42:00.417274 kubelet[1900]: E1101 00:42:00.417219 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:00.998216 systemd[1]: Started sshd@20-143.198.72.73:22-139.178.89.65:42358.service. Nov 1 00:42:01.053225 sshd[3467]: Accepted publickey for core from 139.178.89.65 port 42358 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:01.056073 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:01.062955 systemd-logind[1184]: New session 21 of user core. Nov 1 00:42:01.064019 systemd[1]: Started session-21.scope. Nov 1 00:42:01.241530 sshd[3467]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:01.245364 systemd-logind[1184]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:42:01.245714 systemd[1]: sshd@20-143.198.72.73:22-139.178.89.65:42358.service: Deactivated successfully. Nov 1 00:42:01.246901 systemd[1]: session-21.scope: Deactivated successfully. 
Nov 1 00:42:01.248133 systemd-logind[1184]: Removed session 21. Nov 1 00:42:06.248024 systemd[1]: Started sshd@21-143.198.72.73:22-139.178.89.65:47436.service. Nov 1 00:42:06.304843 sshd[3481]: Accepted publickey for core from 139.178.89.65 port 47436 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:06.308567 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:06.316928 systemd-logind[1184]: New session 22 of user core. Nov 1 00:42:06.318502 systemd[1]: Started session-22.scope. Nov 1 00:42:06.478216 sshd[3481]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:06.481916 systemd-logind[1184]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:42:06.482181 systemd[1]: sshd@21-143.198.72.73:22-139.178.89.65:47436.service: Deactivated successfully. Nov 1 00:42:06.483252 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:42:06.484177 systemd-logind[1184]: Removed session 22. Nov 1 00:42:09.417339 kubelet[1900]: E1101 00:42:09.417301 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:10.417306 kubelet[1900]: E1101 00:42:10.417253 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:11.487837 systemd[1]: Started sshd@22-143.198.72.73:22-139.178.89.65:47452.service. Nov 1 00:42:11.537763 sshd[3493]: Accepted publickey for core from 139.178.89.65 port 47452 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:11.539664 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:11.546049 systemd-logind[1184]: New session 23 of user core. Nov 1 00:42:11.546570 systemd[1]: Started session-23.scope. 
Nov 1 00:42:11.693360 sshd[3493]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:11.700383 systemd[1]: sshd@22-143.198.72.73:22-139.178.89.65:47452.service: Deactivated successfully.
Nov 1 00:42:11.702187 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:42:11.703816 systemd-logind[1184]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:42:11.706174 systemd[1]: Started sshd@23-143.198.72.73:22-139.178.89.65:47456.service.
Nov 1 00:42:11.711238 systemd-logind[1184]: Removed session 23.
Nov 1 00:42:11.760011 sshd[3505]: Accepted publickey for core from 139.178.89.65 port 47456 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:42:11.761711 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:11.767620 systemd-logind[1184]: New session 24 of user core.
Nov 1 00:42:11.768576 systemd[1]: Started session-24.scope.
Nov 1 00:42:13.254609 systemd[1]: run-containerd-runc-k8s.io-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6-runc.0VlYqF.mount: Deactivated successfully.
Nov 1 00:42:13.276013 env[1194]: time="2025-11-01T00:42:13.274249206Z" level=info msg="StopContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" with timeout 30 (s)"
Nov 1 00:42:13.276672 env[1194]: time="2025-11-01T00:42:13.276629562Z" level=info msg="Stop container \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" with signal terminated"
Nov 1 00:42:13.314423 systemd[1]: cri-containerd-eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371.scope: Deactivated successfully.
Nov 1 00:42:13.322466 env[1194]: time="2025-11-01T00:42:13.321924086Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:42:13.343310 env[1194]: time="2025-11-01T00:42:13.343251165Z" level=info msg="StopContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" with timeout 2 (s)"
Nov 1 00:42:13.343836 env[1194]: time="2025-11-01T00:42:13.343779804Z" level=info msg="Stop container \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" with signal terminated"
Nov 1 00:42:13.352921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371-rootfs.mount: Deactivated successfully.
Nov 1 00:42:13.360361 systemd-networkd[1005]: lxc_health: Link DOWN
Nov 1 00:42:13.360370 systemd-networkd[1005]: lxc_health: Lost carrier
Nov 1 00:42:13.363950 env[1194]: time="2025-11-01T00:42:13.363889786Z" level=info msg="shim disconnected" id=eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371
Nov 1 00:42:13.364159 env[1194]: time="2025-11-01T00:42:13.363953111Z" level=warning msg="cleaning up after shim disconnected" id=eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371 namespace=k8s.io
Nov 1 00:42:13.364159 env[1194]: time="2025-11-01T00:42:13.363967951Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:13.397247 env[1194]: time="2025-11-01T00:42:13.397189261Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3556 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:13.401089 env[1194]: time="2025-11-01T00:42:13.400963292Z" level=info msg="StopContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" returns successfully"
Nov 1 00:42:13.401943 env[1194]: time="2025-11-01T00:42:13.401903355Z" level=info msg="StopPodSandbox for \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\""
Nov 1 00:42:13.402454 env[1194]: time="2025-11-01T00:42:13.402418461Z" level=info msg="Container to stop \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.405772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961-shm.mount: Deactivated successfully.
Nov 1 00:42:13.409247 systemd[1]: cri-containerd-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6.scope: Deactivated successfully.
Nov 1 00:42:13.409697 systemd[1]: cri-containerd-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6.scope: Consumed 8.626s CPU time.
Nov 1 00:42:13.434728 systemd[1]: cri-containerd-b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961.scope: Deactivated successfully.
Nov 1 00:42:13.464101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6-rootfs.mount: Deactivated successfully.
Nov 1 00:42:13.471261 env[1194]: time="2025-11-01T00:42:13.471199368Z" level=info msg="shim disconnected" id=3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6
Nov 1 00:42:13.471659 env[1194]: time="2025-11-01T00:42:13.471627628Z" level=warning msg="cleaning up after shim disconnected" id=3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6 namespace=k8s.io
Nov 1 00:42:13.471799 env[1194]: time="2025-11-01T00:42:13.471776653Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:13.481272 env[1194]: time="2025-11-01T00:42:13.481220040Z" level=info msg="shim disconnected" id=b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961
Nov 1 00:42:13.482586 env[1194]: time="2025-11-01T00:42:13.482546496Z" level=warning msg="cleaning up after shim disconnected" id=b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961 namespace=k8s.io
Nov 1 00:42:13.482844 env[1194]: time="2025-11-01T00:42:13.482821052Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:13.489466 env[1194]: time="2025-11-01T00:42:13.489417546Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3602 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:13.492112 env[1194]: time="2025-11-01T00:42:13.492060140Z" level=info msg="StopContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" returns successfully"
Nov 1 00:42:13.493048 env[1194]: time="2025-11-01T00:42:13.492946875Z" level=info msg="StopPodSandbox for \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\""
Nov 1 00:42:13.493285 env[1194]: time="2025-11-01T00:42:13.493251230Z" level=info msg="Container to stop \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.493591 env[1194]: time="2025-11-01T00:42:13.493555110Z" level=info msg="Container to stop \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.493591 env[1194]: time="2025-11-01T00:42:13.493589045Z" level=info msg="Container to stop \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.493682 env[1194]: time="2025-11-01T00:42:13.493602820Z" level=info msg="Container to stop \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.493682 env[1194]: time="2025-11-01T00:42:13.493614398Z" level=info msg="Container to stop \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:42:13.501207 env[1194]: time="2025-11-01T00:42:13.501139435Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3611 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:13.501584 env[1194]: time="2025-11-01T00:42:13.501554845Z" level=info msg="TearDown network for sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" successfully"
Nov 1 00:42:13.501644 env[1194]: time="2025-11-01T00:42:13.501583781Z" level=info msg="StopPodSandbox for \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" returns successfully"
Nov 1 00:42:13.503359 systemd[1]: cri-containerd-988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020.scope: Deactivated successfully.
Nov 1 00:42:13.543173 env[1194]: time="2025-11-01T00:42:13.541401167Z" level=info msg="shim disconnected" id=988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020
Nov 1 00:42:13.543173 env[1194]: time="2025-11-01T00:42:13.541469576Z" level=warning msg="cleaning up after shim disconnected" id=988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020 namespace=k8s.io
Nov 1 00:42:13.543173 env[1194]: time="2025-11-01T00:42:13.541484195Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:13.554803 env[1194]: time="2025-11-01T00:42:13.554635899Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3650 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:13.555299 env[1194]: time="2025-11-01T00:42:13.555247094Z" level=info msg="TearDown network for sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" successfully"
Nov 1 00:42:13.555377 env[1194]: time="2025-11-01T00:42:13.555304307Z" level=info msg="StopPodSandbox for \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" returns successfully"
Nov 1 00:42:13.590905 kubelet[1900]: I1101 00:42:13.590822 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9grns\" (UniqueName: \"kubernetes.io/projected/7705ab90-815a-4f68-98ad-343a00bbfbaf-kube-api-access-9grns\") pod \"7705ab90-815a-4f68-98ad-343a00bbfbaf\" (UID: \"7705ab90-815a-4f68-98ad-343a00bbfbaf\") "
Nov 1 00:42:13.591652 kubelet[1900]: I1101 00:42:13.590922 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7705ab90-815a-4f68-98ad-343a00bbfbaf-cilium-config-path\") pod \"7705ab90-815a-4f68-98ad-343a00bbfbaf\" (UID: \"7705ab90-815a-4f68-98ad-343a00bbfbaf\") "
Nov 1 00:42:13.604722 kubelet[1900]: I1101 00:42:13.604659 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7705ab90-815a-4f68-98ad-343a00bbfbaf-kube-api-access-9grns" (OuterVolumeSpecName: "kube-api-access-9grns") pod "7705ab90-815a-4f68-98ad-343a00bbfbaf" (UID: "7705ab90-815a-4f68-98ad-343a00bbfbaf"). InnerVolumeSpecName "kube-api-access-9grns". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:42:13.605282 kubelet[1900]: I1101 00:42:13.605238 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7705ab90-815a-4f68-98ad-343a00bbfbaf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7705ab90-815a-4f68-98ad-343a00bbfbaf" (UID: "7705ab90-815a-4f68-98ad-343a00bbfbaf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:42:13.691853 kubelet[1900]: I1101 00:42:13.691733 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-lib-modules\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.692230 kubelet[1900]: I1101 00:42:13.692186 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-xtables-lock\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.692431 kubelet[1900]: I1101 00:42:13.692399 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-config-path\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.692577 kubelet[1900]: I1101 00:42:13.692557 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-bpf-maps\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.692727 kubelet[1900]: I1101 00:42:13.692705 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-run\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.692874 kubelet[1900]: I1101 00:42:13.692855 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-cgroup\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693046 kubelet[1900]: I1101 00:42:13.693025 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cni-path\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693199 kubelet[1900]: I1101 00:42:13.693179 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-etc-cni-netd\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693352 kubelet[1900]: I1101 00:42:13.693331 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfdf\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-kube-api-access-jrfdf\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693468 kubelet[1900]: I1101 00:42:13.693453 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-kernel\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693597 kubelet[1900]: I1101 00:42:13.693579 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-net\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693701 kubelet[1900]: I1101 00:42:13.693686 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9384eea-0d2c-4e02-9c7d-3022d6148970-clustermesh-secrets\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.693799 kubelet[1900]: I1101 00:42:13.693786 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-hostproc\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.694086 kubelet[1900]: I1101 00:42:13.694071 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-hubble-tls\") pod \"b9384eea-0d2c-4e02-9c7d-3022d6148970\" (UID: \"b9384eea-0d2c-4e02-9c7d-3022d6148970\") "
Nov 1 00:42:13.694214 kubelet[1900]: I1101 00:42:13.694200 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7705ab90-815a-4f68-98ad-343a00bbfbaf-cilium-config-path\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.694300 kubelet[1900]: I1101 00:42:13.694287 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9grns\" (UniqueName: \"kubernetes.io/projected/7705ab90-815a-4f68-98ad-343a00bbfbaf-kube-api-access-9grns\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.694555 kubelet[1900]: I1101 00:42:13.691893 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.694661 kubelet[1900]: I1101 00:42:13.692231 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.694661 kubelet[1900]: I1101 00:42:13.694494 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:42:13.694661 kubelet[1900]: I1101 00:42:13.694524 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.694661 kubelet[1900]: I1101 00:42:13.694594 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.694661 kubelet[1900]: I1101 00:42:13.694611 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.695266 kubelet[1900]: I1101 00:42:13.694624 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.695266 kubelet[1900]: I1101 00:42:13.694641 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.695266 kubelet[1900]: I1101 00:42:13.694657 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.696975 kubelet[1900]: I1101 00:42:13.696919 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.697350 kubelet[1900]: I1101 00:42:13.697322 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:13.698900 kubelet[1900]: I1101 00:42:13.698866 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:42:13.699154 kubelet[1900]: I1101 00:42:13.699109 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-kube-api-access-jrfdf" (OuterVolumeSpecName: "kube-api-access-jrfdf") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "kube-api-access-jrfdf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:42:13.702416 kubelet[1900]: I1101 00:42:13.702377 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9384eea-0d2c-4e02-9c7d-3022d6148970-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9384eea-0d2c-4e02-9c7d-3022d6148970" (UID: "b9384eea-0d2c-4e02-9c7d-3022d6148970"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:42:13.794752 kubelet[1900]: I1101 00:42:13.794543 1900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-hostproc\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.794995 kubelet[1900]: I1101 00:42:13.794960 1900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-hubble-tls\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795085 kubelet[1900]: I1101 00:42:13.795072 1900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-lib-modules\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795150 kubelet[1900]: I1101 00:42:13.795138 1900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-xtables-lock\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795217 kubelet[1900]: I1101 00:42:13.795206 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-config-path\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795282 kubelet[1900]: I1101 00:42:13.795269 1900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-bpf-maps\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795353 kubelet[1900]: I1101 00:42:13.795342 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-run\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795438 kubelet[1900]: I1101 00:42:13.795412 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cilium-cgroup\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795510 kubelet[1900]: I1101 00:42:13.795499 1900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-cni-path\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795571 kubelet[1900]: I1101 00:42:13.795560 1900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-etc-cni-netd\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795637 kubelet[1900]: I1101 00:42:13.795625 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrfdf\" (UniqueName: \"kubernetes.io/projected/b9384eea-0d2c-4e02-9c7d-3022d6148970-kube-api-access-jrfdf\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795724 kubelet[1900]: I1101 00:42:13.795705 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795805 kubelet[1900]: I1101 00:42:13.795791 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9384eea-0d2c-4e02-9c7d-3022d6148970-host-proc-sys-net\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.795868 kubelet[1900]: I1101 00:42:13.795857 1900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9384eea-0d2c-4e02-9c7d-3022d6148970-clustermesh-secrets\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:13.972592 kubelet[1900]: I1101 00:42:13.972543 1900 scope.go:117] "RemoveContainer" containerID="eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371"
Nov 1 00:42:13.974287 systemd[1]: Removed slice kubepods-besteffort-pod7705ab90_815a_4f68_98ad_343a00bbfbaf.slice.
Nov 1 00:42:13.979863 env[1194]: time="2025-11-01T00:42:13.979640071Z" level=info msg="RemoveContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\""
Nov 1 00:42:13.985607 env[1194]: time="2025-11-01T00:42:13.985529707Z" level=info msg="RemoveContainer for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" returns successfully"
Nov 1 00:42:13.986289 kubelet[1900]: I1101 00:42:13.986244 1900 scope.go:117] "RemoveContainer" containerID="eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371"
Nov 1 00:42:13.986624 env[1194]: time="2025-11-01T00:42:13.986519700Z" level=error msg="ContainerStatus for \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\": not found"
Nov 1 00:42:13.986953 kubelet[1900]: E1101 00:42:13.986912 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\": not found" containerID="eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371"
Nov 1 00:42:13.987103 kubelet[1900]: I1101 00:42:13.986969 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371"} err="failed to get container status \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\": rpc error: code = NotFound desc = an error occurred when try to find container \"eff442644f17ebf5a378930c0aee4b98adc700bef3149170551774dce915e371\": not found"
Nov 1 00:42:13.987103 kubelet[1900]: I1101 00:42:13.987095 1900 scope.go:117] "RemoveContainer" containerID="3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6"
Nov 1 00:42:13.988825 env[1194]: time="2025-11-01T00:42:13.988777655Z" level=info msg="RemoveContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\""
Nov 1 00:42:13.994343 env[1194]: time="2025-11-01T00:42:13.993289198Z" level=info msg="RemoveContainer for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" returns successfully"
Nov 1 00:42:13.995106 kubelet[1900]: I1101 00:42:13.995062 1900 scope.go:117] "RemoveContainer" containerID="b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c"
Nov 1 00:42:13.997083 systemd[1]: Removed slice kubepods-burstable-podb9384eea_0d2c_4e02_9c7d_3022d6148970.slice.
Nov 1 00:42:13.997250 systemd[1]: kubepods-burstable-podb9384eea_0d2c_4e02_9c7d_3022d6148970.slice: Consumed 8.756s CPU time.
Nov 1 00:42:14.002434 env[1194]: time="2025-11-01T00:42:14.002106496Z" level=info msg="RemoveContainer for \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\""
Nov 1 00:42:14.008779 env[1194]: time="2025-11-01T00:42:14.008705207Z" level=info msg="RemoveContainer for \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\" returns successfully"
Nov 1 00:42:14.014735 kubelet[1900]: I1101 00:42:14.012189 1900 scope.go:117] "RemoveContainer" containerID="358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1"
Nov 1 00:42:14.027091 env[1194]: time="2025-11-01T00:42:14.027025648Z" level=info msg="RemoveContainer for \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\""
Nov 1 00:42:14.042033 env[1194]: time="2025-11-01T00:42:14.039039085Z" level=info msg="RemoveContainer for \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\" returns successfully"
Nov 1 00:42:14.042274 kubelet[1900]: I1101 00:42:14.039423 1900 scope.go:117] "RemoveContainer" containerID="e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea"
Nov 1 00:42:14.052732 env[1194]: time="2025-11-01T00:42:14.052565147Z" level=info msg="RemoveContainer for \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\""
Nov 1 00:42:14.056108 env[1194]: time="2025-11-01T00:42:14.056046046Z" level=info msg="RemoveContainer for \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\" returns successfully"
Nov 1 00:42:14.059308 kubelet[1900]: I1101 00:42:14.059218 1900 scope.go:117] "RemoveContainer" containerID="313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645"
Nov 1 00:42:14.061314 env[1194]: time="2025-11-01T00:42:14.061265744Z" level=info msg="RemoveContainer for \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\""
Nov 1 00:42:14.064332 env[1194]: time="2025-11-01T00:42:14.064255920Z" level=info msg="RemoveContainer for \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\" returns successfully"
Nov 1 00:42:14.064799 kubelet[1900]: I1101 00:42:14.064760 1900 scope.go:117] "RemoveContainer" containerID="3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6"
Nov 1 00:42:14.065282 env[1194]: time="2025-11-01T00:42:14.065218592Z" level=error msg="ContainerStatus for \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\": not found"
Nov 1 00:42:14.065742 kubelet[1900]: E1101 00:42:14.065706 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\": not found" containerID="3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6"
Nov 1 00:42:14.065836 kubelet[1900]: I1101 00:42:14.065752 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6"} err="failed to get container status \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cca5a3fe690900938b5a62c4c83dff0f8636ef26d28b5ac87b787f0821a10f6\": not found"
Nov 1 00:42:14.065836 kubelet[1900]: I1101 00:42:14.065787 1900 scope.go:117] "RemoveContainer" containerID="b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c"
Nov 1 00:42:14.066199 env[1194]: time="2025-11-01T00:42:14.066127311Z" level=error msg="ContainerStatus for \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\": not found"
Nov 1 00:42:14.066653 kubelet[1900]: E1101 00:42:14.066624 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\": not found" containerID="b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c"
Nov 1 00:42:14.066888 kubelet[1900]: I1101 00:42:14.066667 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c"} err="failed to get container status \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b487eeac043cbe4214baa020ff630cd930d3ad91525ded12ca75c70606e0f83c\": not found"
Nov 1 00:42:14.066888 kubelet[1900]: I1101 00:42:14.066708 1900 scope.go:117] "RemoveContainer" containerID="358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1"
Nov 1 00:42:14.067221 env[1194]: time="2025-11-01T00:42:14.067168043Z" level=error msg="ContainerStatus for \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\": not found"
Nov 1 00:42:14.067507 kubelet[1900]: E1101 00:42:14.067475 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\": not found" containerID="358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1"
Nov 1 00:42:14.067584 kubelet[1900]: I1101 00:42:14.067514 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1"} err="failed to get container status \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"358aba11262c8c9a75b1283e1a7f08b6bc34bfc0576d586c34e6805be7328dc1\": not found"
Nov 1 00:42:14.067584 kubelet[1900]: I1101 00:42:14.067539 1900 scope.go:117] "RemoveContainer" containerID="e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea"
Nov 1 00:42:14.067874 env[1194]: time="2025-11-01T00:42:14.067812663Z" level=error msg="ContainerStatus for \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\": not found"
Nov 1 00:42:14.068082 kubelet[1900]: E1101 00:42:14.068056 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\": not found" containerID="e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea"
Nov 1 00:42:14.068148 kubelet[1900]: I1101 00:42:14.068091 1900 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea"} err="failed to get container status \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"e22ca9c41bb97f6acc37dd654bc0ef63daae806126bc0014129950b5313f40ea\": not found" Nov 1 00:42:14.068148 kubelet[1900]: I1101 00:42:14.068112 1900 scope.go:117] "RemoveContainer" containerID="313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645" Nov 1 00:42:14.068399 env[1194]: time="2025-11-01T00:42:14.068348576Z" level=error msg="ContainerStatus for \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\": not found" Nov 1 00:42:14.069074 kubelet[1900]: E1101 00:42:14.069039 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\": not found" containerID="313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645" Nov 1 00:42:14.069074 kubelet[1900]: I1101 00:42:14.069067 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645"} err="failed to get container status \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\": rpc error: code = NotFound desc = an error occurred when try to find container \"313ac72dd075ac8b75c94fa39738cfd929771c5e394ae93e4cb419f606c7c645\": not found" Nov 1 00:42:14.240681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961-rootfs.mount: Deactivated successfully. 
Nov 1 00:42:14.240821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020-rootfs.mount: Deactivated successfully. Nov 1 00:42:14.240894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020-shm.mount: Deactivated successfully. Nov 1 00:42:14.240977 systemd[1]: var-lib-kubelet-pods-7705ab90\x2d815a\x2d4f68\x2d98ad\x2d343a00bbfbaf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9grns.mount: Deactivated successfully. Nov 1 00:42:14.241083 systemd[1]: var-lib-kubelet-pods-b9384eea\x2d0d2c\x2d4e02\x2d9c7d\x2d3022d6148970-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrfdf.mount: Deactivated successfully. Nov 1 00:42:14.241160 systemd[1]: var-lib-kubelet-pods-b9384eea\x2d0d2c\x2d4e02\x2d9c7d\x2d3022d6148970-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:42:14.241229 systemd[1]: var-lib-kubelet-pods-b9384eea\x2d0d2c\x2d4e02\x2d9c7d\x2d3022d6148970-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:42:15.140500 sshd[3505]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:15.144245 systemd[1]: sshd@23-143.198.72.73:22-139.178.89.65:47456.service: Deactivated successfully. Nov 1 00:42:15.149140 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:42:15.152219 systemd-logind[1184]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:42:15.154468 systemd[1]: Started sshd@24-143.198.72.73:22-139.178.89.65:47468.service. Nov 1 00:42:15.156515 systemd-logind[1184]: Removed session 24. 
Nov 1 00:42:15.215592 sshd[3670]: Accepted publickey for core from 139.178.89.65 port 47468 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:15.217482 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:15.224376 systemd[1]: Started session-25.scope. Nov 1 00:42:15.225287 systemd-logind[1184]: New session 25 of user core. Nov 1 00:42:15.420099 kubelet[1900]: I1101 00:42:15.419975 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7705ab90-815a-4f68-98ad-343a00bbfbaf" path="/var/lib/kubelet/pods/7705ab90-815a-4f68-98ad-343a00bbfbaf/volumes" Nov 1 00:42:15.421186 kubelet[1900]: I1101 00:42:15.421155 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9384eea-0d2c-4e02-9c7d-3022d6148970" path="/var/lib/kubelet/pods/b9384eea-0d2c-4e02-9c7d-3022d6148970/volumes" Nov 1 00:42:15.645255 kubelet[1900]: E1101 00:42:15.645205 1900 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:42:16.105263 sshd[3670]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:16.121316 systemd[1]: Started sshd@25-143.198.72.73:22-139.178.89.65:38256.service. Nov 1 00:42:16.122458 systemd[1]: sshd@24-143.198.72.73:22-139.178.89.65:47468.service: Deactivated successfully. Nov 1 00:42:16.123788 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:42:16.131376 systemd-logind[1184]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:42:16.133631 systemd-logind[1184]: Removed session 25. Nov 1 00:42:16.160367 systemd[1]: Created slice kubepods-burstable-pod03cf6937_eb58_4c53_b503_b4b3a2a05bd4.slice. 
Nov 1 00:42:16.197155 sshd[3679]: Accepted publickey for core from 139.178.89.65 port 38256 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:16.200391 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:16.206559 systemd-logind[1184]: New session 26 of user core. Nov 1 00:42:16.208247 systemd[1]: Started session-26.scope. Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233441 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-etc-cni-netd\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233520 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cni-path\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233570 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-xtables-lock\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233601 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-config-path\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233658 1900 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-bpf-maps\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234059 kubelet[1900]: I1101 00:42:16.233692 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-lib-modules\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233714 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-net\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233736 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-run\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233756 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-ipsec-secrets\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233772 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hubble-tls\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233831 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hostproc\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234588 kubelet[1900]: I1101 00:42:16.233861 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-clustermesh-secrets\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234919 kubelet[1900]: I1101 00:42:16.233903 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-kernel\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234919 kubelet[1900]: I1101 00:42:16.233932 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-cgroup\") pod \"cilium-pcfvk\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.234919 kubelet[1900]: I1101 00:42:16.233972 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxrsd\" (UniqueName: \"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-kube-api-access-zxrsd\") pod \"cilium-pcfvk\" (UID: 
\"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " pod="kube-system/cilium-pcfvk" Nov 1 00:42:16.467394 kubelet[1900]: E1101 00:42:16.467255 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:16.470161 env[1194]: time="2025-11-01T00:42:16.470048719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcfvk,Uid:03cf6937-eb58-4c53-b503-b4b3a2a05bd4,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:16.491307 env[1194]: time="2025-11-01T00:42:16.491213083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:16.491307 env[1194]: time="2025-11-01T00:42:16.491307879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:16.497756 env[1194]: time="2025-11-01T00:42:16.497639313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:16.498114 env[1194]: time="2025-11-01T00:42:16.498057002Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9 pid=3700 runtime=io.containerd.runc.v2 Nov 1 00:42:16.500089 sshd[3679]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:16.509602 systemd[1]: sshd@25-143.198.72.73:22-139.178.89.65:38256.service: Deactivated successfully. Nov 1 00:42:16.510338 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:42:16.514954 systemd[1]: Started sshd@26-143.198.72.73:22-139.178.89.65:38266.service. Nov 1 00:42:16.516626 systemd-logind[1184]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:42:16.518295 systemd-logind[1184]: Removed session 26. 
Nov 1 00:42:16.540794 systemd[1]: Started cri-containerd-a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9.scope. Nov 1 00:42:16.583522 sshd[3718]: Accepted publickey for core from 139.178.89.65 port 38266 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:16.585396 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:16.591088 systemd[1]: Started session-27.scope. Nov 1 00:42:16.592382 systemd-logind[1184]: New session 27 of user core. Nov 1 00:42:16.609388 env[1194]: time="2025-11-01T00:42:16.609338406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcfvk,Uid:03cf6937-eb58-4c53-b503-b4b3a2a05bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\"" Nov 1 00:42:16.610725 kubelet[1900]: E1101 00:42:16.610467 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:16.617037 env[1194]: time="2025-11-01T00:42:16.616967999Z" level=info msg="CreateContainer within sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:42:16.626369 env[1194]: time="2025-11-01T00:42:16.626310695Z" level=info msg="CreateContainer within sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\"" Nov 1 00:42:16.628754 env[1194]: time="2025-11-01T00:42:16.627171676Z" level=info msg="StartContainer for \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\"" Nov 1 00:42:16.644944 systemd[1]: Started cri-containerd-e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd.scope. 
Nov 1 00:42:16.665421 systemd[1]: cri-containerd-e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd.scope: Deactivated successfully. Nov 1 00:42:16.679455 env[1194]: time="2025-11-01T00:42:16.679396367Z" level=info msg="shim disconnected" id=e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd Nov 1 00:42:16.679820 env[1194]: time="2025-11-01T00:42:16.679797328Z" level=warning msg="cleaning up after shim disconnected" id=e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd namespace=k8s.io Nov 1 00:42:16.679914 env[1194]: time="2025-11-01T00:42:16.679898832Z" level=info msg="cleaning up dead shim" Nov 1 00:42:16.692914 env[1194]: time="2025-11-01T00:42:16.692847614Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3765 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:42:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:42:16.693729 env[1194]: time="2025-11-01T00:42:16.693565041Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Nov 1 00:42:16.695137 env[1194]: time="2025-11-01T00:42:16.694195051Z" level=error msg="Failed to pipe stderr of container \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\"" error="reading from a closed fifo" Nov 1 00:42:16.695336 env[1194]: time="2025-11-01T00:42:16.695057011Z" level=error msg="Failed to pipe stdout of container \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\"" error="reading from a closed fifo" Nov 1 00:42:16.697745 env[1194]: time="2025-11-01T00:42:16.697643516Z" level=error msg="StartContainer for \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\" failed" error="failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:42:16.698454 kubelet[1900]: E1101 00:42:16.698367 1900 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd" Nov 1 00:42:16.699006 kubelet[1900]: E1101 00:42:16.698533 1900 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-pcfvk_kube-system(03cf6937-eb58-4c53-b503-b4b3a2a05bd4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 00:42:16.699141 kubelet[1900]: E1101 00:42:16.699050 1900 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pcfvk" podUID="03cf6937-eb58-4c53-b503-b4b3a2a05bd4" Nov 1 00:42:16.999570 env[1194]: time="2025-11-01T00:42:16.999242003Z" level=info msg="StopPodSandbox for \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\"" Nov 1 00:42:16.999570 env[1194]: time="2025-11-01T00:42:16.999357399Z" level=info msg="Container to stop 
\"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:42:17.032442 systemd[1]: cri-containerd-a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9.scope: Deactivated successfully. Nov 1 00:42:17.073680 env[1194]: time="2025-11-01T00:42:17.073611794Z" level=info msg="shim disconnected" id=a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9 Nov 1 00:42:17.073680 env[1194]: time="2025-11-01T00:42:17.073680671Z" level=warning msg="cleaning up after shim disconnected" id=a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9 namespace=k8s.io Nov 1 00:42:17.073951 env[1194]: time="2025-11-01T00:42:17.073695468Z" level=info msg="cleaning up dead shim" Nov 1 00:42:17.091804 env[1194]: time="2025-11-01T00:42:17.091728025Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3798 runtime=io.containerd.runc.v2\n" Nov 1 00:42:17.092264 env[1194]: time="2025-11-01T00:42:17.092222901Z" level=info msg="TearDown network for sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" successfully" Nov 1 00:42:17.092336 env[1194]: time="2025-11-01T00:42:17.092263796Z" level=info msg="StopPodSandbox for \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" returns successfully" Nov 1 00:42:17.240441 kubelet[1900]: I1101 00:42:17.240372 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-run\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240441 kubelet[1900]: I1101 00:42:17.240441 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hostproc\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240472 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-kernel\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240510 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hubble-tls\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240539 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cni-path\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240568 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-net\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240593 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-xtables-lock\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240696 kubelet[1900]: I1101 00:42:17.240636 1900 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-ipsec-secrets\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240663 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-clustermesh-secrets\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240702 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-cgroup\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240728 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-etc-cni-netd\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240750 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-lib-modules\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") " Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240783 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-config-path\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" 
(UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") "
Nov 1 00:42:17.240934 kubelet[1900]: I1101 00:42:17.240806 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-bpf-maps\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") "
Nov 1 00:42:17.241168 kubelet[1900]: I1101 00:42:17.240838 1900 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxrsd\" (UniqueName: \"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-kube-api-access-zxrsd\") pod \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\" (UID: \"03cf6937-eb58-4c53-b503-b4b3a2a05bd4\") "
Nov 1 00:42:17.241344 kubelet[1900]: I1101 00:42:17.241312 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.241462 kubelet[1900]: I1101 00:42:17.241446 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.241545 kubelet[1900]: I1101 00:42:17.241532 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hostproc" (OuterVolumeSpecName: "hostproc") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.241626 kubelet[1900]: I1101 00:42:17.241612 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.245290 kubelet[1900]: I1101 00:42:17.245246 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:42:17.245545 kubelet[1900]: I1101 00:42:17.245454 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cni-path" (OuterVolumeSpecName: "cni-path") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.245545 kubelet[1900]: I1101 00:42:17.245491 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.245654 kubelet[1900]: I1101 00:42:17.245583 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.246261 kubelet[1900]: I1101 00:42:17.246229 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.251382 kubelet[1900]: I1101 00:42:17.249718 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:42:17.251382 kubelet[1900]: I1101 00:42:17.249735 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.251382 kubelet[1900]: I1101 00:42:17.249762 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:42:17.251382 kubelet[1900]: I1101 00:42:17.250430 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:42:17.251382 kubelet[1900]: I1101 00:42:17.250510 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-kube-api-access-zxrsd" (OuterVolumeSpecName: "kube-api-access-zxrsd") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "kube-api-access-zxrsd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:42:17.255118 kubelet[1900]: I1101 00:42:17.255056 1900 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03cf6937-eb58-4c53-b503-b4b3a2a05bd4" (UID: "03cf6937-eb58-4c53-b503-b4b3a2a05bd4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:42:17.344847 kubelet[1900]: I1101 00:42:17.344791 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxrsd\" (UniqueName: \"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-kube-api-access-zxrsd\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345159 kubelet[1900]: I1101 00:42:17.345123 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-run\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345303 kubelet[1900]: I1101 00:42:17.345283 1900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hostproc\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345404 kubelet[1900]: I1101 00:42:17.345390 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345508 kubelet[1900]: I1101 00:42:17.345490 1900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-hubble-tls\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345597 kubelet[1900]: I1101 00:42:17.345582 1900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cni-path\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345689 kubelet[1900]: I1101 00:42:17.345676 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-host-proc-sys-net\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345799 kubelet[1900]: I1101 00:42:17.345781 1900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-xtables-lock\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.345900 kubelet[1900]: I1101 00:42:17.345886 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.346001 kubelet[1900]: I1101 00:42:17.345977 1900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-clustermesh-secrets\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.346000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9-shm.mount: Deactivated successfully.
Nov 1 00:42:17.346153 systemd[1]: var-lib-kubelet-pods-03cf6937\x2deb58\x2d4c53\x2db503\x2db4b3a2a05bd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzxrsd.mount: Deactivated successfully.
Nov 1 00:42:17.346249 systemd[1]: var-lib-kubelet-pods-03cf6937\x2deb58\x2d4c53\x2db503\x2db4b3a2a05bd4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:42:17.346335 systemd[1]: var-lib-kubelet-pods-03cf6937\x2deb58\x2d4c53\x2db503\x2db4b3a2a05bd4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:42:17.346417 systemd[1]: var-lib-kubelet-pods-03cf6937\x2deb58\x2d4c53\x2db503\x2db4b3a2a05bd4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 1 00:42:17.347090 kubelet[1900]: I1101 00:42:17.347063 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-cgroup\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.347193 kubelet[1900]: I1101 00:42:17.347178 1900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-etc-cni-netd\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.347591 kubelet[1900]: I1101 00:42:17.347566 1900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-lib-modules\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.347917 kubelet[1900]: I1101 00:42:17.347893 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-cilium-config-path\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.348456 kubelet[1900]: I1101 00:42:17.348434 1900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03cf6937-eb58-4c53-b503-b4b3a2a05bd4-bpf-maps\") on node \"ci-3510.3.8-n-39b63463e5\" DevicePath \"\""
Nov 1 00:42:17.422977 systemd[1]: Removed slice kubepods-burstable-pod03cf6937_eb58_4c53_b503_b4b3a2a05bd4.slice.
Nov 1 00:42:17.667644 kubelet[1900]: I1101 00:42:17.666348 1900 setters.go:543] "Node became not ready" node="ci-3510.3.8-n-39b63463e5" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:42:17Z","lastTransitionTime":"2025-11-01T00:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 1 00:42:18.003344 kubelet[1900]: I1101 00:42:18.003301 1900 scope.go:117] "RemoveContainer" containerID="e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd"
Nov 1 00:42:18.006267 env[1194]: time="2025-11-01T00:42:18.005727273Z" level=info msg="RemoveContainer for \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\""
Nov 1 00:42:18.011586 env[1194]: time="2025-11-01T00:42:18.011525186Z" level=info msg="RemoveContainer for \"e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd\" returns successfully"
Nov 1 00:42:18.105141 systemd[1]: Created slice kubepods-burstable-pod736972da_132d_4bba_b0e1_cb9594ea1e2f.slice.
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255513 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-bpf-maps\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255573 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-hostproc\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255595 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-cilium-cgroup\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255612 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-etc-cni-netd\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255629 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgqrr\" (UniqueName: \"kubernetes.io/projected/736972da-132d-4bba-b0e1-cb9594ea1e2f-kube-api-access-lgqrr\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.255679 kubelet[1900]: I1101 00:42:18.255647 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/736972da-132d-4bba-b0e1-cb9594ea1e2f-clustermesh-secrets\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255662 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/736972da-132d-4bba-b0e1-cb9594ea1e2f-cilium-config-path\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255678 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-cilium-run\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255691 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-cni-path\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255704 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-lib-modules\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255718 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-host-proc-sys-kernel\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256203 kubelet[1900]: I1101 00:42:18.255736 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/736972da-132d-4bba-b0e1-cb9594ea1e2f-cilium-ipsec-secrets\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256513 kubelet[1900]: I1101 00:42:18.255750 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-host-proc-sys-net\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256513 kubelet[1900]: I1101 00:42:18.255764 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/736972da-132d-4bba-b0e1-cb9594ea1e2f-xtables-lock\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.256513 kubelet[1900]: I1101 00:42:18.255780 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/736972da-132d-4bba-b0e1-cb9594ea1e2f-hubble-tls\") pod \"cilium-9jhjv\" (UID: \"736972da-132d-4bba-b0e1-cb9594ea1e2f\") " pod="kube-system/cilium-9jhjv"
Nov 1 00:42:18.420941 kubelet[1900]: E1101 00:42:18.420892 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:18.422026 env[1194]: time="2025-11-01T00:42:18.421509533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jhjv,Uid:736972da-132d-4bba-b0e1-cb9594ea1e2f,Namespace:kube-system,Attempt:0,}"
Nov 1 00:42:18.441214 env[1194]: time="2025-11-01T00:42:18.441088281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:42:18.441214 env[1194]: time="2025-11-01T00:42:18.441152220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:42:18.441491 env[1194]: time="2025-11-01T00:42:18.441164148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:42:18.442019 env[1194]: time="2025-11-01T00:42:18.441499429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0 pid=3827 runtime=io.containerd.runc.v2
Nov 1 00:42:18.457819 systemd[1]: Started cri-containerd-e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0.scope.
Nov 1 00:42:18.500858 env[1194]: time="2025-11-01T00:42:18.500794908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jhjv,Uid:736972da-132d-4bba-b0e1-cb9594ea1e2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\""
Nov 1 00:42:18.502478 kubelet[1900]: E1101 00:42:18.501760 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:18.508410 env[1194]: time="2025-11-01T00:42:18.507065538Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 1 00:42:18.518479 env[1194]: time="2025-11-01T00:42:18.518412203Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82\""
Nov 1 00:42:18.520929 env[1194]: time="2025-11-01T00:42:18.520880907Z" level=info msg="StartContainer for \"b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82\""
Nov 1 00:42:18.550122 systemd[1]: Started cri-containerd-b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82.scope.
Nov 1 00:42:18.595055 env[1194]: time="2025-11-01T00:42:18.594788800Z" level=info msg="StartContainer for \"b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82\" returns successfully"
Nov 1 00:42:18.611394 systemd[1]: cri-containerd-b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82.scope: Deactivated successfully.
Nov 1 00:42:18.641183 env[1194]: time="2025-11-01T00:42:18.641117182Z" level=info msg="shim disconnected" id=b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82
Nov 1 00:42:18.641653 env[1194]: time="2025-11-01T00:42:18.641622967Z" level=warning msg="cleaning up after shim disconnected" id=b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82 namespace=k8s.io
Nov 1 00:42:18.641789 env[1194]: time="2025-11-01T00:42:18.641769327Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:18.657278 env[1194]: time="2025-11-01T00:42:18.657202394Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3911 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:19.009476 kubelet[1900]: E1101 00:42:19.009428 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:19.015107 env[1194]: time="2025-11-01T00:42:19.015021957Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 00:42:19.026198 env[1194]: time="2025-11-01T00:42:19.026137639Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc\""
Nov 1 00:42:19.027321 env[1194]: time="2025-11-01T00:42:19.027275779Z" level=info msg="StartContainer for \"40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc\""
Nov 1 00:42:19.056338 systemd[1]: Started cri-containerd-40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc.scope.
Nov 1 00:42:19.094609 env[1194]: time="2025-11-01T00:42:19.094545443Z" level=info msg="StartContainer for \"40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc\" returns successfully"
Nov 1 00:42:19.109058 systemd[1]: cri-containerd-40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc.scope: Deactivated successfully.
Nov 1 00:42:19.136035 env[1194]: time="2025-11-01T00:42:19.135954171Z" level=info msg="shim disconnected" id=40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc
Nov 1 00:42:19.136035 env[1194]: time="2025-11-01T00:42:19.136017397Z" level=warning msg="cleaning up after shim disconnected" id=40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc namespace=k8s.io
Nov 1 00:42:19.136035 env[1194]: time="2025-11-01T00:42:19.136027920Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:19.149299 env[1194]: time="2025-11-01T00:42:19.149232690Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3973 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:19.420057 kubelet[1900]: I1101 00:42:19.419870 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cf6937-eb58-4c53-b503-b4b3a2a05bd4" path="/var/lib/kubelet/pods/03cf6937-eb58-4c53-b503-b4b3a2a05bd4/volumes"
Nov 1 00:42:19.784866 kubelet[1900]: W1101 00:42:19.784820 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cf6937_eb58_4c53_b503_b4b3a2a05bd4.slice/cri-containerd-e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd.scope WatchSource:0}: container "e6814f7d012bd7f6eacd73f6885cdd925bb3f8f3f81be942a910ee045524f9bd" in namespace "k8s.io": not found
Nov 1 00:42:20.016020 kubelet[1900]: E1101 00:42:20.014399 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:20.021106 env[1194]: time="2025-11-01T00:42:20.021054902Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 00:42:20.045977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061289948.mount: Deactivated successfully.
Nov 1 00:42:20.053799 env[1194]: time="2025-11-01T00:42:20.053724422Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c\""
Nov 1 00:42:20.054711 env[1194]: time="2025-11-01T00:42:20.054679192Z" level=info msg="StartContainer for \"8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c\""
Nov 1 00:42:20.093833 systemd[1]: Started cri-containerd-8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c.scope.
Nov 1 00:42:20.143253 env[1194]: time="2025-11-01T00:42:20.142712399Z" level=info msg="StartContainer for \"8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c\" returns successfully"
Nov 1 00:42:20.147636 systemd[1]: cri-containerd-8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c.scope: Deactivated successfully.
Nov 1 00:42:20.185252 env[1194]: time="2025-11-01T00:42:20.185190855Z" level=info msg="shim disconnected" id=8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c
Nov 1 00:42:20.185585 env[1194]: time="2025-11-01T00:42:20.185560848Z" level=warning msg="cleaning up after shim disconnected" id=8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c namespace=k8s.io
Nov 1 00:42:20.185667 env[1194]: time="2025-11-01T00:42:20.185652115Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:20.205980 env[1194]: time="2025-11-01T00:42:20.205908030Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4032 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:20.367945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c-rootfs.mount: Deactivated successfully.
Nov 1 00:42:20.647156 kubelet[1900]: E1101 00:42:20.646974 1900 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 1 00:42:21.019098 kubelet[1900]: E1101 00:42:21.019061 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:21.029088 env[1194]: time="2025-11-01T00:42:21.029029027Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 00:42:21.049248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085721035.mount: Deactivated successfully.
Nov 1 00:42:21.058122 env[1194]: time="2025-11-01T00:42:21.058060009Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881\""
Nov 1 00:42:21.059622 env[1194]: time="2025-11-01T00:42:21.059570074Z" level=info msg="StartContainer for \"048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881\""
Nov 1 00:42:21.086501 systemd[1]: Started cri-containerd-048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881.scope.
Nov 1 00:42:21.132681 systemd[1]: cri-containerd-048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881.scope: Deactivated successfully.
Nov 1 00:42:21.133797 env[1194]: time="2025-11-01T00:42:21.133279311Z" level=info msg="StartContainer for \"048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881\" returns successfully"
Nov 1 00:42:21.159838 env[1194]: time="2025-11-01T00:42:21.159772396Z" level=info msg="shim disconnected" id=048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881
Nov 1 00:42:21.159838 env[1194]: time="2025-11-01T00:42:21.159819503Z" level=warning msg="cleaning up after shim disconnected" id=048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881 namespace=k8s.io
Nov 1 00:42:21.159838 env[1194]: time="2025-11-01T00:42:21.159831632Z" level=info msg="cleaning up dead shim"
Nov 1 00:42:21.170515 env[1194]: time="2025-11-01T00:42:21.170446080Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n"
Nov 1 00:42:21.367429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881-rootfs.mount: Deactivated successfully.
Nov 1 00:42:22.026327 kubelet[1900]: E1101 00:42:22.026280 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:22.032786 env[1194]: time="2025-11-01T00:42:22.032741272Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 00:42:22.048422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12969054.mount: Deactivated successfully.
Nov 1 00:42:22.057750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945329702.mount: Deactivated successfully.
Nov 1 00:42:22.063424 env[1194]: time="2025-11-01T00:42:22.063356601Z" level=info msg="CreateContainer within sandbox \"e635fa7da482c71fef7912dae94c31726fad76adcb7442cc93cf9d49bbf28fa0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"10549339d135204a563890beb94e0c82a1e4d64ff92b83007127f209d6323968\""
Nov 1 00:42:22.064156 env[1194]: time="2025-11-01T00:42:22.064127971Z" level=info msg="StartContainer for \"10549339d135204a563890beb94e0c82a1e4d64ff92b83007127f209d6323968\""
Nov 1 00:42:22.108679 systemd[1]: Started cri-containerd-10549339d135204a563890beb94e0c82a1e4d64ff92b83007127f209d6323968.scope.
Nov 1 00:42:22.152275 env[1194]: time="2025-11-01T00:42:22.152210038Z" level=info msg="StartContainer for \"10549339d135204a563890beb94e0c82a1e4d64ff92b83007127f209d6323968\" returns successfully"
Nov 1 00:42:22.641087 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 1 00:42:22.897357 kubelet[1900]: W1101 00:42:22.897103 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736972da_132d_4bba_b0e1_cb9594ea1e2f.slice/cri-containerd-b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82.scope WatchSource:0}: task b9e16a50dba3c1f8cd8c1e7dbe786b9e1a34014976f46b69df80cf67e960ff82 not found
Nov 1 00:42:23.045791 kubelet[1900]: E1101 00:42:23.045752 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:24.420573 kubelet[1900]: E1101 00:42:24.420365 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:25.397866 env[1194]: time="2025-11-01T00:42:25.397805224Z" level=info msg="StopPodSandbox for \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\""
Nov 1 00:42:25.399863 env[1194]: time="2025-11-01T00:42:25.399769194Z" level=info msg="TearDown network for sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" successfully"
Nov 1 00:42:25.400134 env[1194]: time="2025-11-01T00:42:25.400099081Z" level=info msg="StopPodSandbox for \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" returns successfully"
Nov 1 00:42:25.400906 env[1194]: time="2025-11-01T00:42:25.400854359Z" level=info msg="RemovePodSandbox for \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\""
Nov 1 00:42:25.401019 env[1194]: time="2025-11-01T00:42:25.400899006Z" level=info msg="Forcibly stopping sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\""
Nov 1 00:42:25.401080 env[1194]: time="2025-11-01T00:42:25.401051105Z" level=info msg="TearDown network for sandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" successfully"
Nov 1 00:42:25.406767 env[1194]: time="2025-11-01T00:42:25.406679846Z" level=info msg="RemovePodSandbox \"988914a43f3396ecabaa052ccbc5cdf4a3921497da371e51c32dccb32e3c5020\" returns successfully"
Nov 1 00:42:25.407508 env[1194]: time="2025-11-01T00:42:25.407477680Z" level=info msg="StopPodSandbox for \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\""
Nov 1 00:42:25.407766 env[1194]: time="2025-11-01T00:42:25.407727978Z" level=info msg="TearDown network for sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" successfully"
Nov 1 00:42:25.407926 env[1194]: time="2025-11-01T00:42:25.407907987Z" level=info msg="StopPodSandbox for \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" returns successfully"
Nov 1 00:42:25.408395 env[1194]: time="2025-11-01T00:42:25.408369965Z" level=info msg="RemovePodSandbox for \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\""
Nov 1 00:42:25.408530 env[1194]: time="2025-11-01T00:42:25.408493478Z" level=info msg="Forcibly stopping sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\""
Nov 1 00:42:25.408675 env[1194]: time="2025-11-01T00:42:25.408654984Z" level=info msg="TearDown network for sandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" successfully"
Nov 1 00:42:25.411528 env[1194]: time="2025-11-01T00:42:25.411477065Z" level=info msg="RemovePodSandbox \"a33133fbe232abc207752a2212aa146f9dcc616a13730ec75a840930aac359e9\" returns successfully"
Nov 1 00:42:25.412194 env[1194]: time="2025-11-01T00:42:25.412163895Z" level=info msg="StopPodSandbox for \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\""
Nov 1 00:42:25.412410 env[1194]: time="2025-11-01T00:42:25.412370376Z" level=info msg="TearDown network for sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" successfully"
Nov 1 00:42:25.412517 env[1194]: time="2025-11-01T00:42:25.412500339Z" level=info msg="StopPodSandbox for \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" returns successfully"
Nov 1 00:42:25.413028 env[1194]: time="2025-11-01T00:42:25.412953495Z" level=info msg="RemovePodSandbox for \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\""
Nov 1 00:42:25.413214 env[1194]: time="2025-11-01T00:42:25.413144729Z" level=info msg="Forcibly stopping sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\""
Nov 1 00:42:25.413352 env[1194]: time="2025-11-01T00:42:25.413334338Z" level=info msg="TearDown network for sandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" successfully"
Nov 1 00:42:25.416096 env[1194]: time="2025-11-01T00:42:25.416054479Z" level=info msg="RemovePodSandbox \"b0ef73fc6a886b505621025e9b79e225e0ac3576688a28098345f390ccad3961\" returns successfully"
Nov 1 00:42:25.763876 systemd-networkd[1005]: lxc_health: Link UP
Nov 1 00:42:25.802800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:42:25.802155 systemd-networkd[1005]: lxc_health: Gained carrier
Nov 1 00:42:26.007713 kubelet[1900]: W1101 00:42:26.007660 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736972da_132d_4bba_b0e1_cb9594ea1e2f.slice/cri-containerd-40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc.scope WatchSource:0}: task 40c246131dffba423157e4dec038873f0c8ec972a821a69c686814fc6dc1ecdc not found
Nov 1 00:42:26.422973 kubelet[1900]: E1101 00:42:26.422927 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:26.453638 kubelet[1900]: I1101 00:42:26.453571 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9jhjv" podStartSLOduration=8.453550088 podStartE2EDuration="8.453550088s" podCreationTimestamp="2025-11-01 00:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:23.095163161 +0000 UTC m=+118.010087704" watchObservedRunningTime="2025-11-01 00:42:26.453550088 +0000 UTC m=+121.368474630"
Nov 1 00:42:27.059179 kubelet[1900]: E1101 00:42:27.059133 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:42:27.236957 systemd[1]: run-containerd-runc-k8s.io-10549339d135204a563890beb94e0c82a1e4d64ff92b83007127f209d6323968-runc.Xq1jBi.mount: Deactivated successfully.
Nov 1 00:42:27.769328 systemd-networkd[1005]: lxc_health: Gained IPv6LL Nov 1 00:42:28.061615 kubelet[1900]: E1101 00:42:28.061449 1900 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:42:29.120713 kubelet[1900]: W1101 00:42:29.120654 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736972da_132d_4bba_b0e1_cb9594ea1e2f.slice/cri-containerd-8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c.scope WatchSource:0}: task 8428c255be4a1ba3cd4ae9be645c8bf3a9566e8b8e85ce941133c67e61dbee0c not found Nov 1 00:42:31.705363 sshd[3718]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:31.709192 systemd[1]: sshd@26-143.198.72.73:22-139.178.89.65:38266.service: Deactivated successfully. Nov 1 00:42:31.710373 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:42:31.711191 systemd-logind[1184]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:42:31.712820 systemd-logind[1184]: Removed session 27. Nov 1 00:42:32.228970 kubelet[1900]: W1101 00:42:32.228911 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736972da_132d_4bba_b0e1_cb9594ea1e2f.slice/cri-containerd-048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881.scope WatchSource:0}: task 048c4135f59dad01d0c09f838878081d6d56e73516c56ca75557ea53a06c4881 not found