Nov 1 00:41:58.906736 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:41:58.906769 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:41:58.906783 kernel: BIOS-provided physical RAM map:
Nov 1 00:41:58.906789 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:41:58.906795 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:41:58.906801 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:41:58.906809 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 1 00:41:58.906816 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 1 00:41:58.906824 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:41:58.906831 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:41:58.906837 kernel: NX (Execute Disable) protection: active
Nov 1 00:41:58.906844 kernel: SMBIOS 2.8 present.
Nov 1 00:41:58.906850 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 1 00:41:58.906856 kernel: Hypervisor detected: KVM
Nov 1 00:41:58.906865 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:41:58.906874 kernel: kvm-clock: cpu 0, msr 1b1a0001, primary cpu clock
Nov 1 00:41:58.906881 kernel: kvm-clock: using sched offset of 3487772428 cycles
Nov 1 00:41:58.906889 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:41:58.906899 kernel: tsc: Detected 2494.140 MHz processor
Nov 1 00:41:58.906907 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:41:58.906914 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:41:58.906921 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 1 00:41:58.906928 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:41:58.906938 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:41:58.906945 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 1 00:41:58.906952 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.906960 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.906966 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.906973 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:41:58.906980 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.906987 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.906994 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.907004 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:41:58.907011 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 1 00:41:58.907018 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 1 00:41:58.907025 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:41:58.907032 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 1 00:41:58.907039 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 1 00:41:58.907046 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 1 00:41:58.907053 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 1 00:41:58.907067 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:41:58.915157 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:41:58.915177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:41:58.915192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 00:41:58.915206 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 1 00:41:58.915220 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 1 00:41:58.915242 kernel: Zone ranges:
Nov 1 00:41:58.915255 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:41:58.915269 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 1 00:41:58.915282 kernel: Normal empty
Nov 1 00:41:58.915295 kernel: Movable zone start for each node
Nov 1 00:41:58.915308 kernel: Early memory node ranges
Nov 1 00:41:58.915322 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:41:58.915335 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 1 00:41:58.915349 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 1 00:41:58.915365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:41:58.915386 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:41:58.915399 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 1 00:41:58.915413 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:41:58.915426 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:41:58.915440 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:41:58.915454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:41:58.915467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:41:58.915480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:41:58.915496 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:41:58.915527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:41:58.915538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:41:58.915550 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:41:58.915562 kernel: TSC deadline timer available
Nov 1 00:41:58.915576 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:41:58.915589 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 1 00:41:58.915602 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:41:58.915615 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:41:58.915632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:41:58.915646 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:41:58.915660 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:41:58.915674 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:41:58.915688 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Nov 1 00:41:58.915702 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:41:58.915715 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 1 00:41:58.915729 kernel: Policy zone: DMA32
Nov 1 00:41:58.915744 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:41:58.915761 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:41:58.915775 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:41:58.915789 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:41:58.915800 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:41:58.915813 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 123076K reserved, 0K cma-reserved)
Nov 1 00:41:58.915824 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:41:58.915835 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:41:58.915846 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:41:58.915860 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:41:58.915872 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:41:58.915884 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:41:58.915895 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:41:58.915907 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:41:58.915918 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:41:58.915929 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:41:58.915941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:41:58.915952 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:41:58.915967 kernel: random: crng init done
Nov 1 00:41:58.915979 kernel: Console: colour VGA+ 80x25
Nov 1 00:41:58.915990 kernel: printk: console [tty0] enabled
Nov 1 00:41:58.916001 kernel: printk: console [ttyS0] enabled
Nov 1 00:41:58.916012 kernel: ACPI: Core revision 20210730
Nov 1 00:41:58.916023 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:41:58.916035 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:41:58.916046 kernel: x2apic enabled
Nov 1 00:41:58.916058 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:41:58.916087 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:41:58.916104 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 1 00:41:58.916117 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Nov 1 00:41:58.916138 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 00:41:58.916147 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 00:41:58.916155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:41:58.916163 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:41:58.916171 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:41:58.916179 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:41:58.916190 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:41:58.916207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:41:58.916217 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:41:58.916233 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:41:58.916244 kernel: active return thunk: its_return_thunk
Nov 1 00:41:58.916255 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:41:58.916267 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:41:58.916279 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:41:58.916290 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:41:58.916303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:41:58.916318 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:41:58.916330 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:41:58.916342 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:41:58.916354 kernel: LSM: Security Framework initializing
Nov 1 00:41:58.916366 kernel: SELinux: Initializing.
Nov 1 00:41:58.916377 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:41:58.916390 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:41:58.916406 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 1 00:41:58.916418 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 1 00:41:58.916430 kernel: signal: max sigframe size: 1776
Nov 1 00:41:58.916443 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:41:58.916455 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:41:58.916468 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:41:58.916481 kernel: x86: Booting SMP configuration:
Nov 1 00:41:58.916493 kernel: .... node #0, CPUs: #1
Nov 1 00:41:58.916505 kernel: kvm-clock: cpu 1, msr 1b1a0041, secondary cpu clock
Nov 1 00:41:58.916520 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Nov 1 00:41:58.916533 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:41:58.916544 kernel: smpboot: Max logical packages: 1
Nov 1 00:41:58.916556 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Nov 1 00:41:58.916567 kernel: devtmpfs: initialized
Nov 1 00:41:58.916579 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:41:58.916592 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:41:58.916604 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:41:58.916616 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:41:58.916632 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:41:58.916645 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:41:58.916658 kernel: audit: type=2000 audit(1761957717.941:1): state=initialized audit_enabled=0 res=1
Nov 1 00:41:58.916670 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:41:58.916684 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:41:58.916696 kernel: cpuidle: using governor menu
Nov 1 00:41:58.916714 kernel: ACPI: bus type PCI registered
Nov 1 00:41:58.916727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:41:58.916739 kernel: dca service started, version 1.12.1
Nov 1 00:41:58.916755 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:41:58.916768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:41:58.916780 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:41:58.916793 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:41:58.916805 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:41:58.916818 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:41:58.916830 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:41:58.916844 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:41:58.916857 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:41:58.916875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:41:58.916889 kernel: ACPI: Interpreter enabled
Nov 1 00:41:58.916900 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:41:58.916913 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:41:58.916926 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:41:58.916937 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:41:58.916948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:41:58.917243 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:41:58.917403 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 1 00:41:58.917424 kernel: acpiphp: Slot [3] registered
Nov 1 00:41:58.917437 kernel: acpiphp: Slot [4] registered
Nov 1 00:41:58.917449 kernel: acpiphp: Slot [5] registered
Nov 1 00:41:58.917461 kernel: acpiphp: Slot [6] registered
Nov 1 00:41:58.917475 kernel: acpiphp: Slot [7] registered
Nov 1 00:41:58.917487 kernel: acpiphp: Slot [8] registered
Nov 1 00:41:58.917499 kernel: acpiphp: Slot [9] registered
Nov 1 00:41:58.917511 kernel: acpiphp: Slot [10] registered
Nov 1 00:41:58.917528 kernel: acpiphp: Slot [11] registered
Nov 1 00:41:58.917540 kernel: acpiphp: Slot [12] registered
Nov 1 00:41:58.917551 kernel: acpiphp: Slot [13] registered
Nov 1 00:41:58.917563 kernel: acpiphp: Slot [14] registered
Nov 1 00:41:58.917575 kernel: acpiphp: Slot [15] registered
Nov 1 00:41:58.917588 kernel: acpiphp: Slot [16] registered
Nov 1 00:41:58.917599 kernel: acpiphp: Slot [17] registered
Nov 1 00:41:58.917611 kernel: acpiphp: Slot [18] registered
Nov 1 00:41:58.917623 kernel: acpiphp: Slot [19] registered
Nov 1 00:41:58.917639 kernel: acpiphp: Slot [20] registered
Nov 1 00:41:58.917653 kernel: acpiphp: Slot [21] registered
Nov 1 00:41:58.917665 kernel: acpiphp: Slot [22] registered
Nov 1 00:41:58.917677 kernel: acpiphp: Slot [23] registered
Nov 1 00:41:58.917689 kernel: acpiphp: Slot [24] registered
Nov 1 00:41:58.917701 kernel: acpiphp: Slot [25] registered
Nov 1 00:41:58.917713 kernel: acpiphp: Slot [26] registered
Nov 1 00:41:58.917726 kernel: acpiphp: Slot [27] registered
Nov 1 00:41:58.917738 kernel: acpiphp: Slot [28] registered
Nov 1 00:41:58.917750 kernel: acpiphp: Slot [29] registered
Nov 1 00:41:58.917765 kernel: acpiphp: Slot [30] registered
Nov 1 00:41:58.917778 kernel: acpiphp: Slot [31] registered
Nov 1 00:41:58.917790 kernel: PCI host bridge to bus 0000:00
Nov 1 00:41:58.917976 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:41:58.918091 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:41:58.918203 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:41:58.918297 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:41:58.918383 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 1 00:41:58.918460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:41:58.918576 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:41:58.918684 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:41:58.918833 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 1 00:41:58.918926 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 1 00:41:58.919023 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 1 00:41:58.919144 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 1 00:41:58.919270 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 1 00:41:58.919392 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 1 00:41:58.919500 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 1 00:41:58.919605 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 1 00:41:58.919730 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:41:58.919858 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 1 00:41:58.919984 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 1 00:41:58.923190 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:41:58.923325 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 1 00:41:58.923426 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 1 00:41:58.923532 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 1 00:41:58.923629 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 1 00:41:58.923725 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:41:58.923830 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:41:58.923924 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 1 00:41:58.924056 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 1 00:41:58.924283 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 1 00:41:58.924406 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:41:58.924500 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 1 00:41:58.924586 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 1 00:41:58.924695 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 1 00:41:58.924828 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:41:58.924918 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 1 00:41:58.925003 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 1 00:41:58.925102 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 1 00:41:58.925207 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:41:58.925348 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:41:58.925455 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 1 00:41:58.925543 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 1 00:41:58.925646 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:41:58.925735 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 1 00:41:58.925876 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 1 00:41:58.929214 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 1 00:41:58.929363 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 1 00:41:58.929475 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 1 00:41:58.929570 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 1 00:41:58.929581 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:41:58.929590 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:41:58.929599 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:41:58.929612 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:41:58.929621 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:41:58.929630 kernel: iommu: Default domain type: Translated
Nov 1 00:41:58.929639 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:41:58.929739 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 1 00:41:58.929830 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:41:58.929919 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 1 00:41:58.929931 kernel: vgaarb: loaded
Nov 1 00:41:58.929940 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:41:58.929952 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:41:58.929960 kernel: PTP clock support registered
Nov 1 00:41:58.929969 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:41:58.929977 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:41:58.929985 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:41:58.929994 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 1 00:41:58.930002 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:41:58.930010 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:41:58.930019 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:41:58.930029 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:41:58.930038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:41:58.930046 kernel: pnp: PnP ACPI init
Nov 1 00:41:58.930055 kernel: pnp: PnP ACPI: found 4 devices
Nov 1 00:41:58.930063 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:41:58.930083 kernel: NET: Registered PF_INET protocol family
Nov 1 00:41:58.930091 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:41:58.930100 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:41:58.930111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:41:58.930119 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:41:58.930128 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 00:41:58.930136 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:41:58.930144 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:41:58.930153 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:41:58.930161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:41:58.930169 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:41:58.930261 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:41:58.930346 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:41:58.930425 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:41:58.930503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:41:58.930582 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 1 00:41:58.930673 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 1 00:41:58.930764 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:41:58.930851 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 1 00:41:58.930862 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 1 00:41:58.930953 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 29774 usecs
Nov 1 00:41:58.930964 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:41:58.930973 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:41:58.930982 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Nov 1 00:41:58.930991 kernel: Initialise system trusted keyrings
Nov 1 00:41:58.930999 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:41:58.931007 kernel: Key type asymmetric registered
Nov 1 00:41:58.931016 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:41:58.931024 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:41:58.931035 kernel: io scheduler mq-deadline registered
Nov 1 00:41:58.931043 kernel: io scheduler kyber registered
Nov 1 00:41:58.931052 kernel: io scheduler bfq registered
Nov 1 00:41:58.931060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:41:58.931068 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 1 00:41:58.931084 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:41:58.931093 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:41:58.931101 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:41:58.931109 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:41:58.931120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:41:58.931129 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:41:58.931137 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:41:58.931145 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:41:58.931265 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:41:58.931348 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:41:58.931429 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:41:58 UTC (1761957718)
Nov 1 00:41:58.931525 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 1 00:41:58.931545 kernel: intel_pstate: CPU model not supported
Nov 1 00:41:58.931557 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:41:58.931568 kernel: Segment Routing with IPv6
Nov 1 00:41:58.931576 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:41:58.931585 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:41:58.931593 kernel: Key type dns_resolver registered
Nov 1 00:41:58.931601 kernel: IPI shorthand broadcast: enabled
Nov 1 00:41:58.931610 kernel: sched_clock: Marking stable (668069301, 135090491)->(941902118, -138742326)
Nov 1 00:41:58.931618 kernel: registered taskstats version 1
Nov 1 00:41:58.931629 kernel: Loading compiled-in X.509 certificates
Nov 1 00:41:58.931638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:41:58.931646 kernel: Key type .fscrypt registered
Nov 1 00:41:58.931654 kernel: Key type fscrypt-provisioning registered
Nov 1 00:41:58.931663 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:41:58.931672 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:41:58.931680 kernel: ima: No architecture policies found
Nov 1 00:41:58.931694 kernel: clk: Disabling unused clocks
Nov 1 00:41:58.931711 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:41:58.931721 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:41:58.931732 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:41:58.931743 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:41:58.931756 kernel: Run /init as init process
Nov 1 00:41:58.931768 kernel: with arguments:
Nov 1 00:41:58.931835 kernel: /init
Nov 1 00:41:58.931852 kernel: with environment:
Nov 1 00:41:58.931864 kernel: HOME=/
Nov 1 00:41:58.931872 kernel: TERM=linux
Nov 1 00:41:58.931884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:41:58.931897 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:41:58.931914 systemd[1]: Detected virtualization kvm.
Nov 1 00:41:58.931929 systemd[1]: Detected architecture x86-64.
Nov 1 00:41:58.931941 systemd[1]: Running in initrd.
Nov 1 00:41:58.931955 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:41:58.931968 systemd[1]: Hostname set to .
Nov 1 00:41:58.931986 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:41:58.931995 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:41:58.932004 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:41:58.932016 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:41:58.932031 systemd[1]: Reached target paths.target.
Nov 1 00:41:58.932047 systemd[1]: Reached target slices.target.
Nov 1 00:41:58.932061 systemd[1]: Reached target swap.target.
Nov 1 00:41:58.935146 systemd[1]: Reached target timers.target.
Nov 1 00:41:58.935166 systemd[1]: Listening on iscsid.socket.
Nov 1 00:41:58.935176 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:41:58.935186 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:41:58.935196 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:41:58.935205 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:41:58.935214 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:41:58.935224 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:41:58.935233 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:41:58.935245 systemd[1]: Reached target sockets.target.
Nov 1 00:41:58.935257 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:41:58.935269 systemd[1]: Finished network-cleanup.service.
Nov 1 00:41:58.935279 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:41:58.935289 systemd[1]: Starting systemd-journald.service...
Nov 1 00:41:58.935298 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:41:58.935310 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:41:58.935320 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:41:58.935329 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:41:58.935339 kernel: audit: type=1130 audit(1761957718.912:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.935350 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:41:58.935365 systemd-journald[184]: Journal started
Nov 1 00:41:58.935436 systemd-journald[184]: Runtime Journal (/run/log/journal/5cac1e69b18645fc9437a4a2a35142a1) is 4.9M, max 39.5M, 34.5M free.
Nov 1 00:41:58.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.929249 systemd-modules-load[185]: Inserted module 'overlay'
Nov 1 00:41:59.000857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:41:59.000884 kernel: Bridge firewalling registered
Nov 1 00:41:59.000897 systemd[1]: Started systemd-journald.service.
Nov 1 00:41:59.000913 kernel: audit: type=1130 audit(1761957718.986:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.000935 kernel: audit: type=1130 audit(1761957718.992:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.000946 kernel: SCSI subsystem initialized
Nov 1 00:41:58.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.940054 systemd-resolved[186]: Positive Trust Anchors:
Nov 1 00:41:59.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.940063 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:41:59.012974 kernel: audit: type=1130 audit(1761957719.001:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.013002 kernel: audit: type=1130 audit(1761957719.005:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:58.940134 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:41:59.022371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:41:59.022398 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:41:59.022411 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:41:58.943083 systemd-resolved[186]: Defaulting to hostname 'linux'.
Nov 1 00:41:58.974139 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 1 00:41:58.992832 systemd[1]: Started systemd-resolved.service.
Nov 1 00:41:59.001939 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:41:59.005866 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:41:59.007320 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:41:59.012942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:41:59.033158 kernel: audit: type=1130 audit(1761957719.027:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.026253 systemd-modules-load[185]: Inserted module 'dm_multipath'
Nov 1 00:41:59.026995 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:41:59.028350 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:41:59.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.034849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:41:59.039661 kernel: audit: type=1130 audit(1761957719.035:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.042784 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:41:59.053961 kernel: audit: type=1130 audit(1761957719.043:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.054832 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:41:59.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.056458 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:41:59.061857 kernel: audit: type=1130 audit(1761957719.055:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.069899 dracut-cmdline[207]: dracut-dracut-053
Nov 1 00:41:59.072687 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:41:59.161135 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:41:59.180116 kernel: iscsi: registered transport (tcp)
Nov 1 00:41:59.206503 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:41:59.206597 kernel: QLogic iSCSI HBA Driver
Nov 1 00:41:59.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.248904 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:41:59.250544 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:41:59.308167 kernel: raid6: avx2x4 gen() 23181 MB/s
Nov 1 00:41:59.325140 kernel: raid6: avx2x4 xor() 6278 MB/s
Nov 1 00:41:59.342138 kernel: raid6: avx2x2 gen() 24060 MB/s
Nov 1 00:41:59.360152 kernel: raid6: avx2x2 xor() 21019 MB/s
Nov 1 00:41:59.377156 kernel: raid6: avx2x1 gen() 19828 MB/s
Nov 1 00:41:59.394144 kernel: raid6: avx2x1 xor() 17161 MB/s
Nov 1 00:41:59.411154 kernel: raid6: sse2x4 gen() 11110 MB/s
Nov 1 00:41:59.428151 kernel: raid6: sse2x4 xor() 6249 MB/s
Nov 1 00:41:59.445159 kernel: raid6: sse2x2 gen() 12276 MB/s
Nov 1 00:41:59.462155 kernel: raid6: sse2x2 xor() 8255 MB/s
Nov 1 00:41:59.479151 kernel: raid6: sse2x1 gen() 10831 MB/s
Nov 1 00:41:59.497063 kernel: raid6: sse2x1 xor() 5689 MB/s
Nov 1 00:41:59.497217 kernel: raid6: using algorithm avx2x2 gen() 24060 MB/s
Nov 1 00:41:59.497240 kernel: raid6: .... xor() 21019 MB/s, rmw enabled
Nov 1 00:41:59.498005 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:41:59.513117 kernel: xor: automatically using best checksumming function avx
Nov 1 00:41:59.622116 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 00:41:59.634283 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:41:59.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.635000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:41:59.635000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:41:59.636044 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:41:59.651376 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Nov 1 00:41:59.657891 systemd[1]: Started systemd-udevd.service.
Nov 1 00:41:59.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.662190 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:41:59.680364 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation
Nov 1 00:41:59.726818 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:41:59.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.729138 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:41:59.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:41:59.784973 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:41:59.858814 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 1 00:41:59.945353 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:41:59.945396 kernel: GPT:9289727 != 125829119
Nov 1 00:41:59.945415 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:41:59.945432 kernel: GPT:9289727 != 125829119
Nov 1 00:41:59.945450 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:41:59.945469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:41:59.945488 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:41:59.956742 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:41:59.956762 kernel: ACPI: bus type USB registered
Nov 1 00:41:59.956784 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:41:59.956796 kernel: usbcore: registered new interface driver hub
Nov 1 00:41:59.960123 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 1 00:41:59.965654 kernel: usbcore: registered new device driver usb
Nov 1 00:41:59.978250 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:41:59.981108 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:41:59.986099 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 1 00:41:59.996102 kernel: ehci-pci: EHCI PCI platform driver
Nov 1 00:42:00.003659 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:42:00.099584 kernel: libata version 3.00 loaded.
Nov 1 00:42:00.099614 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (443)
Nov 1 00:42:00.099627 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 1 00:42:00.099825 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 1 00:42:00.099839 kernel: scsi host1: ata_piix
Nov 1 00:42:00.099989 kernel: scsi host2: ata_piix
Nov 1 00:42:00.100147 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 1 00:42:00.100161 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 1 00:42:00.100172 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 1 00:42:00.100309 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 1 00:42:00.100431 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 1 00:42:00.100553 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Nov 1 00:42:00.100655 kernel: hub 1-0:1.0: USB hub found
Nov 1 00:42:00.100836 kernel: hub 1-0:1.0: 2 ports detected
Nov 1 00:42:00.102625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:42:00.103223 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:42:00.109438 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:42:00.123181 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:42:00.125809 systemd[1]: Starting disk-uuid.service...
Nov 1 00:42:00.132725 disk-uuid[504]: Primary Header is updated.
Nov 1 00:42:00.132725 disk-uuid[504]: Secondary Entries is updated.
Nov 1 00:42:00.132725 disk-uuid[504]: Secondary Header is updated.
Nov 1 00:42:00.147111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:42:00.168448 kernel: GPT:disk_guids don't match.
Nov 1 00:42:00.168569 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:42:00.168611 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:42:01.176115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:42:01.176201 disk-uuid[506]: The operation has completed successfully.
Nov 1 00:42:01.228841 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:42:01.228955 systemd[1]: Finished disk-uuid.service.
Nov 1 00:42:01.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.230862 systemd[1]: Starting verity-setup.service...
Nov 1 00:42:01.253101 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:42:01.313976 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:42:01.318360 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:42:01.319699 systemd[1]: Finished verity-setup.service.
Nov 1 00:42:01.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.420144 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:42:01.421977 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:42:01.422890 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:42:01.424184 systemd[1]: Starting ignition-setup.service...
Nov 1 00:42:01.425735 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:42:01.445490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:42:01.445583 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:42:01.445600 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:42:01.464929 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:42:01.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.474883 systemd[1]: Finished ignition-setup.service.
Nov 1 00:42:01.478253 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:42:01.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.622139 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:42:01.625000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:42:01.627696 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:42:01.642691 ignition[613]: Ignition 2.14.0
Nov 1 00:42:01.643927 ignition[613]: Stage: fetch-offline
Nov 1 00:42:01.644043 ignition[613]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:01.645396 ignition[613]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:01.650309 ignition[613]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:01.651431 ignition[613]: parsed url from cmdline: ""
Nov 1 00:42:01.651545 ignition[613]: no config URL provided
Nov 1 00:42:01.652108 ignition[613]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:42:01.652809 ignition[613]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:42:01.653374 ignition[613]: failed to fetch config: resource requires networking
Nov 1 00:42:01.654320 ignition[613]: Ignition finished successfully
Nov 1 00:42:01.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.656128 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:42:01.667895 systemd-networkd[690]: lo: Link UP
Nov 1 00:42:01.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.667916 systemd-networkd[690]: lo: Gained carrier
Nov 1 00:42:01.669000 systemd-networkd[690]: Enumeration completed
Nov 1 00:42:01.669600 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:42:01.669631 systemd[1]: Started systemd-networkd.service.
Nov 1 00:42:01.670322 systemd[1]: Reached target network.target.
Nov 1 00:42:01.671111 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 1 00:42:01.672754 systemd-networkd[690]: eth1: Link UP
Nov 1 00:42:01.672761 systemd-networkd[690]: eth1: Gained carrier
Nov 1 00:42:01.673564 systemd[1]: Starting ignition-fetch.service...
Nov 1 00:42:01.675392 systemd[1]: Starting iscsiuio.service...
Nov 1 00:42:01.687979 systemd-networkd[690]: eth0: Link UP
Nov 1 00:42:01.687985 systemd-networkd[690]: eth0: Gained carrier
Nov 1 00:42:01.700957 ignition[692]: Ignition 2.14.0
Nov 1 00:42:01.700977 ignition[692]: Stage: fetch
Nov 1 00:42:01.701451 ignition[692]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:01.706326 systemd-networkd[690]: eth0: DHCPv4 address 64.23.181.132/20, gateway 64.23.176.1 acquired from 169.254.169.253
Nov 1 00:42:01.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.701485 ignition[692]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:01.708235 systemd[1]: Started iscsiuio.service.
Nov 1 00:42:01.710268 systemd[1]: Starting iscsid.service...
Nov 1 00:42:01.712244 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Nov 1 00:42:01.714011 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:01.714309 ignition[692]: parsed url from cmdline: ""
Nov 1 00:42:01.714316 ignition[692]: no config URL provided
Nov 1 00:42:01.714328 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:42:01.714346 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:42:01.714397 ignition[692]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 1 00:42:01.720490 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:42:01.720490 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 00:42:01.720490 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:42:01.720490 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:42:01.720490 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:42:01.720490 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:42:01.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.724374 systemd[1]: Started iscsid.service.
Nov 1 00:42:01.727836 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:42:01.748411 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:42:01.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.749352 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:42:01.750179 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:42:01.751165 systemd[1]: Reached target remote-fs.target.
Nov 1 00:42:01.753894 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:42:01.766595 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:42:01.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.769827 ignition[692]: GET result: OK
Nov 1 00:42:01.770644 ignition[692]: parsing config with SHA512: 61091e197a0a9fce61a45e39f4343cc0ca9ddc22504c87b0f88246b33be0dbbcde5e4f0f03345357f92f344510d3cb1258afe5fd62e6fce5bb06ab37a15f4883
Nov 1 00:42:01.780770 unknown[692]: fetched base config from "system"
Nov 1 00:42:01.781519 unknown[692]: fetched base config from "system"
Nov 1 00:42:01.782115 unknown[692]: fetched user config from "digitalocean"
Nov 1 00:42:01.783283 ignition[692]: fetch: fetch complete
Nov 1 00:42:01.783904 ignition[692]: fetch: fetch passed
Nov 1 00:42:01.784488 ignition[692]: Ignition finished successfully
Nov 1 00:42:01.786533 systemd[1]: Finished ignition-fetch.service.
Nov 1 00:42:01.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.789010 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:42:01.801951 ignition[715]: Ignition 2.14.0
Nov 1 00:42:01.803093 ignition[715]: Stage: kargs
Nov 1 00:42:01.803941 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:01.804681 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:01.807273 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:01.810115 ignition[715]: kargs: kargs passed
Nov 1 00:42:01.810950 ignition[715]: Ignition finished successfully
Nov 1 00:42:01.812817 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:42:01.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.815120 systemd[1]: Starting ignition-disks.service...
Nov 1 00:42:01.828560 ignition[720]: Ignition 2.14.0
Nov 1 00:42:01.828575 ignition[720]: Stage: disks
Nov 1 00:42:01.828753 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:01.828791 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:01.831493 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:01.833970 ignition[720]: disks: disks passed
Nov 1 00:42:01.835177 systemd[1]: Finished ignition-disks.service.
Nov 1 00:42:01.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.834065 ignition[720]: Ignition finished successfully
Nov 1 00:42:01.836588 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:42:01.837304 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:42:01.838339 systemd[1]: Reached target local-fs.target.
Nov 1 00:42:01.839309 systemd[1]: Reached target sysinit.target.
Nov 1 00:42:01.840370 systemd[1]: Reached target basic.target.
Nov 1 00:42:01.842978 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:42:01.863229 systemd-fsck[728]: ROOT: clean, 637/553520 files, 56032/553472 blocks
Nov 1 00:42:01.867212 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:42:01.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:01.869049 systemd[1]: Mounting sysroot.mount...
Nov 1 00:42:01.882101 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:42:01.882534 systemd[1]: Mounted sysroot.mount.
Nov 1 00:42:01.883811 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:42:01.886593 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:42:01.889737 systemd[1]: Starting flatcar-digitalocean-network.service...
Nov 1 00:42:01.893651 systemd[1]: Starting flatcar-metadata-hostname.service...
Nov 1 00:42:01.895651 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:42:01.896901 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:42:01.900032 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:42:01.904737 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:42:01.919088 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:42:01.933550 initrd-setup-root[748]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:42:01.948686 initrd-setup-root[758]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:42:01.963407 initrd-setup-root[768]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:42:02.029191 coreos-metadata[735]: Nov 01 00:42:02.029 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:42:02.045728 coreos-metadata[735]: Nov 01 00:42:02.045 INFO Fetch successful
Nov 1 00:42:02.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.051740 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:42:02.053475 systemd[1]: Starting ignition-mount.service...
Nov 1 00:42:02.056771 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:42:02.061893 coreos-metadata[735]: Nov 01 00:42:02.061 INFO wrote hostname ci-3510.3.8-n-14edb40b39 to /sysroot/etc/hostname
Nov 1 00:42:02.064184 systemd[1]: Finished flatcar-metadata-hostname.service.
Nov 1 00:42:02.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.070382 bash[786]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:42:02.079156 coreos-metadata[734]: Nov 01 00:42:02.079 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:42:02.086940 ignition[787]: INFO : Ignition 2.14.0
Nov 1 00:42:02.086940 ignition[787]: INFO : Stage: mount
Nov 1 00:42:02.088297 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:02.088297 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:02.090053 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:02.093513 ignition[787]: INFO : mount: mount passed
Nov 1 00:42:02.094081 ignition[787]: INFO : Ignition finished successfully
Nov 1 00:42:02.094683 systemd[1]: Finished ignition-mount.service.
Nov 1 00:42:02.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.095880 coreos-metadata[734]: Nov 01 00:42:02.095 INFO Fetch successful
Nov 1 00:42:02.102805 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 1 00:42:02.102918 systemd[1]: Finished flatcar-digitalocean-network.service.
Nov 1 00:42:02.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.105302 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:42:02.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:02.340260 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:42:02.350101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Nov 1 00:42:02.353115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:42:02.353210 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:42:02.353225 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:42:02.365920 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:42:02.367554 systemd[1]: Starting ignition-files.service...
Nov 1 00:42:02.388188 ignition[816]: INFO : Ignition 2.14.0
Nov 1 00:42:02.389128 ignition[816]: INFO : Stage: files
Nov 1 00:42:02.389751 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:02.390391 ignition[816]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:02.393098 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:02.395683 ignition[816]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:42:02.397008 ignition[816]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:42:02.397689 ignition[816]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:42:02.401489 ignition[816]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:42:02.402372 ignition[816]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:42:02.404344 unknown[816]: wrote ssh authorized keys file for user: core
Nov 1 00:42:02.405208 ignition[816]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:42:02.405938 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:42:02.405938 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:42:02.405938 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:42:02.405938 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:42:02.444905 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:42:02.510820 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:42:02.512617 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:42:02.513940 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 1 00:42:02.718453 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Nov 1 00:42:02.822199 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:42:02.822199 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:42:02.823863 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:02.828767 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:42:03.031385 systemd-networkd[690]: eth0: Gained IPv6LL
Nov 1 00:42:03.042893 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Nov 1 00:42:03.318605 ignition[816]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:03.318605 ignition[816]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:42:03.318605 ignition[816]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:42:03.318605 ignition[816]: INFO : files: op(e): [started] processing unit "containerd.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(e): [finished] processing unit "containerd.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:42:03.321899 ignition[816]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:42:03.341629 kernel: kauditd_printk_skb: 29 callbacks suppressed
Nov 1 00:42:03.341665 kernel: audit: type=1130 audit(1761957723.328:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.327943 systemd[1]: Finished ignition-files.service.
Nov 1 00:42:03.349335 kernel: audit: type=1130 audit(1761957723.343:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.349391 kernel: audit: type=1131 audit(1761957723.343:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.349542 ignition[816]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:42:03.349542 ignition[816]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:42:03.349542 ignition[816]: INFO : files: files passed
Nov 1 00:42:03.349542 ignition[816]: INFO : Ignition finished successfully
Nov 1 00:42:03.357544 kernel: audit: type=1130 audit(1761957723.351:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.330927 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:42:03.336621 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:42:03.359569 initrd-setup-root-after-ignition[841]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:42:03.338268 systemd[1]: Starting ignition-quench.service...
Nov 1 00:42:03.342641 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:42:03.342780 systemd[1]: Finished ignition-quench.service.
Nov 1 00:42:03.345516 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:42:03.351412 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:42:03.357258 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:42:03.378133 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:42:03.379246 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:42:03.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.380925 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:42:03.388485 kernel: audit: type=1130 audit(1761957723.380:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.388521 kernel: audit: type=1131 audit(1761957723.380:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.388868 systemd[1]: Reached target initrd.target.
Nov 1 00:42:03.389472 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:42:03.390820 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:42:03.405686 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:42:03.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.408399 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:42:03.417219 kernel: audit: type=1130 audit(1761957723.406:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.424335 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:42:03.425035 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:42:03.425963 systemd[1]: Stopped target timers.target.
Nov 1 00:42:03.426785 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:42:03.431746 kernel: audit: type=1131 audit(1761957723.427:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.426960 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:42:03.427810 systemd[1]: Stopped target initrd.target.
Nov 1 00:42:03.432559 systemd[1]: Stopped target basic.target.
Nov 1 00:42:03.433860 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:42:03.434784 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:42:03.435809 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:42:03.436761 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:42:03.437812 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:42:03.438803 systemd[1]: Stopped target sysinit.target.
Nov 1 00:42:03.439926 systemd[1]: Stopped target local-fs.target.
Nov 1 00:42:03.440886 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:42:03.441775 systemd[1]: Stopped target swap.target.
Nov 1 00:42:03.442666 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:42:03.447801 kernel: audit: type=1131 audit(1761957723.443:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.442892 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:42:03.443904 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:42:03.448450 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:42:03.453658 kernel: audit: type=1131 audit(1761957723.449:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.448690 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:42:03.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.449978 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:42:03.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.450189 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:42:03.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.454401 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:42:03.454566 systemd[1]: Stopped ignition-files.service.
Nov 1 00:42:03.455322 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:42:03.455475 systemd[1]: Stopped flatcar-metadata-hostname.service.
Nov 1 00:42:03.457517 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:42:03.459574 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:42:03.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.469438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:42:03.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.469672 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:42:03.470426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:42:03.470525 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:42:03.474068 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:42:03.474235 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:42:03.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.476250 ignition[854]: INFO : Ignition 2.14.0
Nov 1 00:42:03.476250 ignition[854]: INFO : Stage: umount
Nov 1 00:42:03.476250 ignition[854]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:03.476250 ignition[854]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:42:03.480363 ignition[854]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:42:03.482718 ignition[854]: INFO : umount: umount passed
Nov 1 00:42:03.483378 ignition[854]: INFO : Ignition finished successfully
Nov 1 00:42:03.485124 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:42:03.485254 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:42:03.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.486318 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:42:03.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.486381 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:42:03.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.486956 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:42:03.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.487018 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:42:03.487865 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:42:03.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.487935 systemd[1]: Stopped ignition-fetch.service.
Nov 1 00:42:03.488650 systemd[1]: Stopped target network.target.
Nov 1 00:42:03.489468 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:42:03.489523 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:42:03.490436 systemd[1]: Stopped target paths.target.
Nov 1 00:42:03.491220 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:42:03.495171 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:42:03.505461 systemd[1]: Stopped target slices.target.
Nov 1 00:42:03.510434 systemd[1]: Stopped target sockets.target.
Nov 1 00:42:03.524030 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:42:03.524115 systemd[1]: Closed iscsid.socket.
Nov 1 00:42:03.532798 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:42:03.532859 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:42:03.540781 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:42:03.540889 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:42:03.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.542248 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:42:03.543225 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:42:03.546720 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:42:03.547212 systemd-networkd[690]: eth0: DHCPv6 lease lost
Nov 1 00:42:03.547649 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:42:03.547761 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:42:03.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.549348 systemd-networkd[690]: eth1: DHCPv6 lease lost
Nov 1 00:42:03.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.549740 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:42:03.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.549832 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:42:03.552000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:42:03.551422 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:42:03.551592 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:42:03.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.555000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:42:03.552771 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:42:03.552805 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:42:03.553567 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:42:03.553615 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:42:03.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.556010 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:42:03.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.556795 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:42:03.556870 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:42:03.557720 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:42:03.557764 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:42:03.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.560266 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:42:03.560315 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:42:03.561477 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:42:03.567852 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:42:03.571391 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:42:03.571581 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:42:03.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.573162 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:42:03.573276 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:42:03.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.574503 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:42:03.574553 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:42:03.575092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:42:03.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.575131 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:42:03.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.576061 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:42:03.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.576130 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:42:03.576856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:42:03.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.576894 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:42:03.585199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:42:03.585281 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:42:03.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:03.586975 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:42:03.589411 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:42:03.589497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Nov 1 00:42:03.590098 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:42:03.590155 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:42:03.591152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:42:03.591195 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:42:03.593003 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 1 00:42:03.595558 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:42:03.595667 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:42:03.596324 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:42:03.597680 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:42:03.606920 systemd[1]: Switching root.
Nov 1 00:42:03.609000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:42:03.609000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:42:03.609000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:42:03.610000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:42:03.610000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:42:03.631014 iscsid[700]: iscsid shutting down.
Nov 1 00:42:03.631673 systemd-journald[184]: Received SIGTERM from PID 1 (n/a).
Nov 1 00:42:03.631755 systemd-journald[184]: Journal stopped
Nov 1 00:42:07.071812 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:42:07.071899 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:42:07.071919 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:42:07.071938 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:42:07.071950 kernel: SELinux: policy capability open_perms=1
Nov 1 00:42:07.071962 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:42:07.071979 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:42:07.072276 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:42:07.072293 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:42:07.072305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:42:07.072321 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:42:07.072335 systemd[1]: Successfully loaded SELinux policy in 55.363ms.
Nov 1 00:42:07.072361 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.331ms.
Nov 1 00:42:07.072376 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:42:07.072389 systemd[1]: Detected virtualization kvm.
Nov 1 00:42:07.072402 systemd[1]: Detected architecture x86-64.
Nov 1 00:42:07.072413 systemd[1]: Detected first boot.
Nov 1 00:42:07.072425 systemd[1]: Hostname set to .
Nov 1 00:42:07.072442 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:07.072454 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:42:07.072466 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:42:07.072479 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:42:07.072497 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:42:07.072512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:42:07.072531 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:42:07.072548 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Nov 1 00:42:07.072562 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:42:07.072576 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:42:07.072589 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Nov 1 00:42:07.072602 systemd[1]: Created slice system-getty.slice.
Nov 1 00:42:07.072614 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:42:07.072631 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:42:07.072644 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:42:07.072656 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:42:07.072673 systemd[1]: Created slice user.slice.
Nov 1 00:42:07.072686 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:42:07.072698 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:42:07.072710 systemd[1]: Set up automount boot.automount.
Nov 1 00:42:07.072724 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:42:07.072736 systemd[1]: Reached target integritysetup.target.
Nov 1 00:42:07.072749 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:42:07.072764 systemd[1]: Reached target remote-fs.target.
Nov 1 00:42:07.072776 systemd[1]: Reached target slices.target.
Nov 1 00:42:07.072789 systemd[1]: Reached target swap.target.
Nov 1 00:42:07.072801 systemd[1]: Reached target torcx.target.
Nov 1 00:42:07.072814 systemd[1]: Reached target veritysetup.target.
Nov 1 00:42:07.072843 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:42:07.072856 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:42:07.072869 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:42:07.072881 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:42:07.072897 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:42:07.072910 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:42:07.072923 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:42:07.072935 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:42:07.072947 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:42:07.072960 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:42:07.072973 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:42:07.072986 systemd[1]: Mounting media.mount...
Nov 1 00:42:07.072998 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:07.073014 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:42:07.073031 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:42:07.073043 systemd[1]: Mounting tmp.mount...
Nov 1 00:42:07.073055 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:42:07.073068 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:42:07.073089 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:42:07.073101 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:42:07.073114 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:42:07.073128 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:42:07.073145 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:42:07.073157 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:42:07.073889 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:42:07.073908 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:42:07.073920 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:42:07.073933 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:42:07.073946 systemd[1]: Starting systemd-journald.service...
Nov 1 00:42:07.073959 kernel: fuse: init (API version 7.34)
Nov 1 00:42:07.073973 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:42:07.073992 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:42:07.074005 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:42:07.074018 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:42:07.074032 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:07.074045 kernel: loop: module loaded
Nov 1 00:42:07.074057 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:42:07.074078 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:42:07.074092 systemd[1]: Mounted media.mount.
Nov 1 00:42:07.074138 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:42:07.074156 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:42:07.074170 systemd[1]: Mounted tmp.mount.
Nov 1 00:42:07.074186 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:42:07.074199 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:42:07.074211 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:42:07.074224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:42:07.074236 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:42:07.074248 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:42:07.074261 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:42:07.074276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:42:07.074289 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:42:07.074301 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:42:07.074315 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:42:07.074327 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:42:07.074343 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:42:07.074361 systemd-journald[994]: Journal started
Nov 1 00:42:07.074449 systemd-journald[994]: Runtime Journal (/run/log/journal/5cac1e69b18645fc9437a4a2a35142a1) is 4.9M, max 39.5M, 34.5M free.
Nov 1 00:42:06.825000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:42:06.825000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:42:07.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.067000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:42:07.067000 audit[994]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd6b13c80 a2=4000 a3=7fffd6b13d1c items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:07.067000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:42:07.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.080163 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:42:07.080219 systemd[1]: Started systemd-journald.service.
Nov 1 00:42:07.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.083659 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:42:07.084489 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:42:07.085406 systemd[1]: Reached target network-pre.target.
Nov 1 00:42:07.088767 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:42:07.090714 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:42:07.091310 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:42:07.094854 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:42:07.096735 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:42:07.100907 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:42:07.104596 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:42:07.105696 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:42:07.107790 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:42:07.111217 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:42:07.113789 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:42:07.131017 systemd-journald[994]: Time spent on flushing to /var/log/journal/5cac1e69b18645fc9437a4a2a35142a1 is 62.672ms for 1087 entries.
Nov 1 00:42:07.131017 systemd-journald[994]: System Journal (/var/log/journal/5cac1e69b18645fc9437a4a2a35142a1) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:42:07.200254 systemd-journald[994]: Received client request to flush runtime journal.
Nov 1 00:42:07.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.149230 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:42:07.149851 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:42:07.178707 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:42:07.181418 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:42:07.184482 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:42:07.206381 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:42:07.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.224885 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:42:07.226953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:42:07.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.230792 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:42:07.232771 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:42:07.247103 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:42:07.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.265195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:42:07.773912 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:42:07.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.776159 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:42:07.802880 systemd-udevd[1053]: Using default interface naming scheme 'v252'.
Nov 1 00:42:07.827670 systemd[1]: Started systemd-udevd.service.
Nov 1 00:42:07.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.830498 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:42:07.843582 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:42:07.914225 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:07.914482 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:42:07.916126 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:42:07.920267 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:42:07.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.924522 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:42:07.925108 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:42:07.925199 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:42:07.925309 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:07.925875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:42:07.926123 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:42:07.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.933004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:42:07.933214 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:42:07.933906 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:42:07.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.935041 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:42:07.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:07.945527 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:42:07.945744 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:42:07.946357 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:42:07.986524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:42:08.013382 systemd[1]: Found device dev-ttyS0.device.
Nov 1 00:42:08.085487 systemd-networkd[1060]: lo: Link UP
Nov 1 00:42:08.085501 systemd-networkd[1060]: lo: Gained carrier
Nov 1 00:42:08.086092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:42:08.088324 systemd-networkd[1060]: Enumeration completed
Nov 1 00:42:08.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:08.088452 systemd-networkd[1060]: eth1: Configuring with /run/systemd/network/10-9e:62:92:39:29:41.network.
Nov 1 00:42:08.088509 systemd[1]: Started systemd-networkd.service.
Nov 1 00:42:08.090333 systemd-networkd[1060]: eth0: Configuring with /run/systemd/network/10-26:97:40:e7:d0:bf.network.
Nov 1 00:42:08.091435 systemd-networkd[1060]: eth1: Link UP
Nov 1 00:42:08.091448 systemd-networkd[1060]: eth1: Gained carrier
Nov 1 00:42:08.095648 systemd-networkd[1060]: eth0: Link UP
Nov 1 00:42:08.095662 systemd-networkd[1060]: eth0: Gained carrier
Nov 1 00:42:08.145103 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:42:08.134000 audit[1064]: AVC avc: denied { confidentiality } for pid=1064 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:42:08.134000 audit[1064]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55caa59ea040 a1=338ec a2=7f7eee3d7bc5 a3=5 items=110 ppid=1053 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:08.134000 audit: CWD cwd="/"
Nov 1 00:42:08.134000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=1 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=2 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=3 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=4 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=5 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=6 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=7 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=8 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=9 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=10 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=11 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=12 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=13 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=14 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=15 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=16 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=17 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=18 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=19 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=20 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=21 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=22 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=23 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=24 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=25 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=26 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=27 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=28 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=29 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=30 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=31 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=32 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=33 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=34 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=35 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=36 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=37 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=38 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=39 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=40 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=41 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=42 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=43 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=44 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=45 name=(null) inode=14765 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=46 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=47 name=(null) inode=14766 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=48 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=49 name=(null) inode=14767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=50 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=51 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=52 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=53 name=(null) inode=14769 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=55 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=56 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=57 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=58 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=59 name=(null) inode=14772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=60 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=61 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=62 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=63 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=64 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=65 name=(null) inode=14775 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=66 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=67 name=(null) inode=14776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=68 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=69 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=70 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=71 name=(null) inode=14778 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=72 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=73 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:08.134000 audit: PATH item=74 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=75 name=(null) inode=14780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=76 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=77 name=(null) inode=14781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=78 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=79 name=(null) inode=14782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=80 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=81 name=(null) inode=14783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=82 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=83 name=(null) inode=14784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=84 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=85 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=86 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=87 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=88 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=89 name=(null) inode=14787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=90 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=91 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=92 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 
audit: PATH item=93 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=94 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=95 name=(null) inode=14790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=96 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=97 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=98 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=99 name=(null) inode=14792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=100 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=101 name=(null) inode=14793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=102 name=(null) inode=14791 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=103 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=104 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=105 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=106 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=107 name=(null) inode=14796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PATH item=109 name=(null) inode=14797 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:08.134000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:42:08.186106 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:42:08.193095 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:42:08.193198 kernel: piix4_smbus 0000:00:01.3: SMBus Host 
Controller at 0x700, revision 0 Nov 1 00:42:08.352112 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:42:08.384242 kernel: kauditd_printk_skb: 201 callbacks suppressed Nov 1 00:42:08.384403 kernel: audit: type=1130 audit(1761957728.380:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.380067 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:42:08.383102 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:42:08.410459 lvm[1097]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:08.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.445911 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:42:08.446598 systemd[1]: Reached target cryptsetup.target. Nov 1 00:42:08.451152 kernel: audit: type=1130 audit(1761957728.446:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.453254 systemd[1]: Starting lvm2-activation.service... Nov 1 00:42:08.460409 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:08.489039 systemd[1]: Finished lvm2-activation.service. Nov 1 00:42:08.489783 systemd[1]: Reached target local-fs-pre.target. 
Nov 1 00:42:08.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.496119 kernel: audit: type=1130 audit(1761957728.489:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.496115 systemd[1]: Mounting media-configdrive.mount... Nov 1 00:42:08.496694 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:42:08.496765 systemd[1]: Reached target machines.target. Nov 1 00:42:08.498936 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:42:08.513625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:42:08.519096 kernel: audit: type=1130 audit(1761957728.514:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.527093 kernel: ISO 9660 Extensions: RRIP_1991A Nov 1 00:42:08.529950 systemd[1]: Mounted media-configdrive.mount. Nov 1 00:42:08.530978 systemd[1]: Reached target local-fs.target. Nov 1 00:42:08.534005 systemd[1]: Starting ldconfig.service... Nov 1 00:42:08.536245 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:42:08.536922 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:08.539570 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:42:08.543693 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:42:08.550371 systemd[1]: Starting systemd-sysext.service... Nov 1 00:42:08.560161 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Nov 1 00:42:08.562532 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:42:08.583185 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:42:08.603196 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:08.603657 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:42:08.616905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:42:08.619541 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:42:08.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.629226 kernel: audit: type=1130 audit(1761957728.624:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:08.640199 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:42:08.692472 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:42:08.707798 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) Nov 1 00:42:08.707798 systemd-fsck[1119]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:42:08.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.716159 kernel: audit: type=1130 audit(1761957728.710:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.710232 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:42:08.713361 systemd[1]: Mounting boot.mount... Nov 1 00:42:08.724130 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:42:08.737385 systemd[1]: Mounted boot.mount. Nov 1 00:42:08.758440 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:42:08.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.764165 kernel: audit: type=1130 audit(1761957728.759:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.770140 (sd-sysext)[1123]: Using extensions 'kubernetes'. Nov 1 00:42:08.774187 (sd-sysext)[1123]: Merged extensions into '/usr'. 
Nov 1 00:42:08.814330 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:08.820158 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:42:08.821826 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:08.825119 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:08.829756 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:08.834165 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:08.842290 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:08.843026 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:08.843620 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:08.858182 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:42:08.867303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:08.867687 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:08.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.875134 kernel: audit: type=1130 audit(1761957728.869:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.869820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:08.870128 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:08.876037 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 1 00:42:08.876357 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:08.880308 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:08.880948 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:08.881577 systemd[1]: Finished systemd-sysext.service. Nov 1 00:42:08.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.889107 kernel: audit: type=1131 audit(1761957728.869:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.897210 systemd[1]: Starting ensure-sysext.service... Nov 1 00:42:08.902312 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:42:08.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.916136 kernel: audit: type=1130 audit(1761957728.875:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.913908 systemd[1]: Reloading. Nov 1 00:42:08.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:08.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.934539 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:42:08.942749 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:42:08.951808 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:42:09.041249 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:42:09.092640 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-11-01T00:42:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:09.096751 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-11-01T00:42:09Z" level=info msg="torcx already run" Nov 1 00:42:09.218576 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:42:09.218600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:09.240909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:09.303309 systemd-networkd[1060]: eth0: Gained IPv6LL Nov 1 00:42:09.305413 systemd[1]: Finished ldconfig.service. Nov 1 00:42:09.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.307686 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:42:09.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.311631 systemd[1]: Starting audit-rules.service... Nov 1 00:42:09.314202 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:42:09.324015 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:42:09.327581 systemd[1]: Starting systemd-resolved.service... Nov 1 00:42:09.330444 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:42:09.336275 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:42:09.338318 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:42:09.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.347066 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Nov 1 00:42:09.354589 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:09.358164 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:09.361758 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:09.362416 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.362610 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:09.362794 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:09.364015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:09.365142 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:09.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.370000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.370834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:09.371124 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:09.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:09.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.375900 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.380662 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:09.384567 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:09.385768 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.387416 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:09.387842 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:09.393228 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:09.393532 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:09.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.400618 systemd[1]: Finished systemd-update-utmp.service. 
Nov 1 00:42:09.406608 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.411449 systemd[1]: Starting modprobe@drm.service... Nov 1 00:42:09.416503 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:09.418702 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.418982 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:09.423665 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:42:09.424551 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:09.428056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:09.428464 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:09.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.430688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:09.435685 systemd[1]: Finished ensure-sysext.service. Nov 1 00:42:09.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:09.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.441215 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:42:09.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.444544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:09.444865 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:09.449351 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:42:09.449693 systemd[1]: Finished modprobe@drm.service. Nov 1 00:42:09.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.457399 systemd[1]: Starting systemd-update-done.service... Nov 1 00:42:09.465834 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:42:09.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:09.467981 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:09.468225 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:09.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.468857 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.486415 systemd[1]: Finished systemd-update-done.service. Nov 1 00:42:09.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:09.497137 augenrules[1260]: No rules Nov 1 00:42:09.496000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:42:09.496000 audit[1260]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc030d840 a2=420 a3=0 items=0 ppid=1217 pid=1260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:09.496000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:42:09.498812 systemd[1]: Finished audit-rules.service. Nov 1 00:42:09.529125 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 1 00:42:09.529154 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:09.542896 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:42:09.543785 systemd[1]: Reached target time-set.target. Nov 1 00:42:09.561924 systemd-resolved[1222]: Positive Trust Anchors: Nov 1 00:42:09.561944 systemd-resolved[1222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:09.561976 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:09.568461 systemd-resolved[1222]: Using system hostname 'ci-3510.3.8-n-14edb40b39'. Nov 1 00:42:09.570841 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:09.571596 systemd[1]: Reached target network.target. Nov 1 00:42:09.572035 systemd[1]: Reached target network-online.target. Nov 1 00:42:09.572537 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:09.572955 systemd[1]: Reached target sysinit.target. Nov 1 00:42:09.573464 systemd[1]: Started motdgen.path. Nov 1 00:42:09.573886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:42:09.574567 systemd[1]: Started logrotate.timer. Nov 1 00:42:09.575053 systemd[1]: Started mdadm.timer. Nov 1 00:42:09.575449 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:42:09.575906 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:42:09.575938 systemd[1]: Reached target paths.target. 
Nov 1 00:42:09.576333 systemd[1]: Reached target timers.target. Nov 1 00:42:09.577361 systemd[1]: Listening on dbus.socket. Nov 1 00:42:09.580038 systemd[1]: Starting docker.socket... Nov 1 00:42:09.582370 systemd[1]: Listening on sshd.socket. Nov 1 00:42:09.582939 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:09.583427 systemd[1]: Listening on docker.socket. Nov 1 00:42:09.583958 systemd[1]: Reached target sockets.target. Nov 1 00:42:09.584400 systemd[1]: Reached target basic.target. Nov 1 00:42:09.585028 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:42:09.585098 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.585129 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:09.586802 systemd[1]: Starting containerd.service... Nov 1 00:42:09.589283 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:42:09.591435 systemd[1]: Starting dbus.service... Nov 1 00:42:09.595419 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:42:09.598753 systemd[1]: Starting extend-filesystems.service... Nov 1 00:42:09.599707 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:42:09.601851 systemd[1]: Starting kubelet.service... Nov 1 00:42:09.604209 systemd[1]: Starting motdgen.service... Nov 1 00:42:09.611738 jq[1274]: false Nov 1 00:42:09.615364 systemd[1]: Starting prepare-helm.service... Nov 1 00:42:09.618339 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:42:09.624738 systemd[1]: Starting sshd-keygen.service... Nov 1 00:42:09.630396 systemd[1]: Starting systemd-logind.service... 
Nov 1 00:42:09.632424 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:09.632548 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:42:09.637273 systemd[1]: Starting update-engine.service... Nov 1 00:42:09.639611 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:42:09.643903 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:42:09.645035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:42:09.659344 jq[1288]: true Nov 1 00:42:09.679127 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:42:09.679449 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:42:09.690312 dbus-daemon[1271]: [system] SELinux support is enabled Nov 1 00:42:09.690979 systemd[1]: Started dbus.service. Nov 1 00:42:09.693765 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:42:09.693814 systemd[1]: Reached target system-config.target. Nov 1 00:42:09.694437 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:42:09.694459 systemd[1]: Reached target user-config.target. 
Nov 1 00:42:09.700005 tar[1293]: linux-amd64/LICENSE Nov 1 00:42:09.700644 tar[1293]: linux-amd64/helm Nov 1 00:42:09.710342 jq[1299]: true Nov 1 00:42:09.730886 extend-filesystems[1276]: Found loop1 Nov 1 00:42:09.733320 extend-filesystems[1276]: Found vda Nov 1 00:42:09.744862 extend-filesystems[1276]: Found vda1 Nov 1 00:42:09.750273 extend-filesystems[1276]: Found vda2 Nov 1 00:42:09.750975 extend-filesystems[1276]: Found vda3 Nov 1 00:42:09.751527 extend-filesystems[1276]: Found usr Nov 1 00:42:09.752068 extend-filesystems[1276]: Found vda4 Nov 1 00:42:09.752595 extend-filesystems[1276]: Found vda6 Nov 1 00:42:09.753434 extend-filesystems[1276]: Found vda7 Nov 1 00:42:09.753434 extend-filesystems[1276]: Found vda9 Nov 1 00:42:09.753434 extend-filesystems[1276]: Checking size of /dev/vda9 Nov 1 00:42:09.758948 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:42:09.759281 systemd[1]: Finished motdgen.service. Nov 1 00:42:09.763628 systemd-timesyncd[1223]: Contacted time server 72.14.183.39:123 (0.flatcar.pool.ntp.org). Nov 1 00:42:09.763893 systemd-timesyncd[1223]: Initial clock synchronization to Sat 2025-11-01 00:42:10.038706 UTC. Nov 1 00:42:09.795680 extend-filesystems[1276]: Resized partition /dev/vda9 Nov 1 00:42:09.809922 extend-filesystems[1330]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:42:09.813829 update_engine[1287]: I1101 00:42:09.813194 1287 main.cc:92] Flatcar Update Engine starting Nov 1 00:42:09.815233 systemd-networkd[1060]: eth1: Gained IPv6LL Nov 1 00:42:09.818117 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 1 00:42:09.819352 systemd[1]: Started update-engine.service. Nov 1 00:42:09.819729 update_engine[1287]: I1101 00:42:09.819406 1287 update_check_scheduler.cc:74] Next update check in 2m38s Nov 1 00:42:09.822179 systemd[1]: Started locksmithd.service. 
Nov 1 00:42:09.859894 bash[1334]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:42:09.861406 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:42:09.914869 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 00:42:09.935958 extend-filesystems[1330]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:42:09.935958 extend-filesystems[1330]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 00:42:09.935958 extend-filesystems[1330]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 00:42:09.938493 extend-filesystems[1276]: Resized filesystem in /dev/vda9 Nov 1 00:42:09.938493 extend-filesystems[1276]: Found vdb Nov 1 00:42:09.936609 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:42:09.937049 systemd[1]: Finished extend-filesystems.service. Nov 1 00:42:09.977059 systemd-logind[1286]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:42:09.978278 systemd-logind[1286]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:42:09.978707 systemd-logind[1286]: New seat seat0. Nov 1 00:42:09.985211 env[1294]: time="2025-11-01T00:42:09.984203467Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:42:09.986918 systemd[1]: Started systemd-logind.service. Nov 1 00:42:09.994105 coreos-metadata[1270]: Nov 01 00:42:09.992 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:42:10.020256 coreos-metadata[1270]: Nov 01 00:42:10.019 INFO Fetch successful Nov 1 00:42:10.028066 unknown[1270]: wrote ssh authorized keys file for user: core Nov 1 00:42:10.039199 update-ssh-keys[1343]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:42:10.040365 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:42:10.062261 env[1294]: time="2025-11-01T00:42:10.062207588Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 1 00:42:10.063946 env[1294]: time="2025-11-01T00:42:10.063916748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:10.066004 env[1294]: time="2025-11-01T00:42:10.065960425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:10.066864 env[1294]: time="2025-11-01T00:42:10.066837143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:10.067348 env[1294]: time="2025-11-01T00:42:10.067321038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:10.069032 env[1294]: time="2025-11-01T00:42:10.068995028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:10.069207 env[1294]: time="2025-11-01T00:42:10.069184126Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:42:10.069292 env[1294]: time="2025-11-01T00:42:10.069276314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:10.071992 env[1294]: time="2025-11-01T00:42:10.071951415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:10.072436 env[1294]: time="2025-11-01T00:42:10.072409691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:42:10.073261 env[1294]: time="2025-11-01T00:42:10.073230024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:10.073375 env[1294]: time="2025-11-01T00:42:10.073357475Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:42:10.073510 env[1294]: time="2025-11-01T00:42:10.073493079Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:42:10.073577 env[1294]: time="2025-11-01T00:42:10.073563618Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.077909165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.077968167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.077982170Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078037577Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078054072Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078095504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078111042Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078126855Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078141691Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078185919Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078200242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078213038Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078424047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:42:10.079551 env[1294]: time="2025-11-01T00:42:10.078519487Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079027400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079085931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079102807Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079157666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079171244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079183846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079195219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079219782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079232637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079244750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079256864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079272049Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079428382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079443815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Nov 1 00:42:10.080282 env[1294]: time="2025-11-01T00:42:10.079467636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:42:10.080633 env[1294]: time="2025-11-01T00:42:10.079479498Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:42:10.080633 env[1294]: time="2025-11-01T00:42:10.079494696Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:42:10.080633 env[1294]: time="2025-11-01T00:42:10.079506162Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:42:10.080633 env[1294]: time="2025-11-01T00:42:10.079756972Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:42:10.080633 env[1294]: time="2025-11-01T00:42:10.079809574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:42:10.081531 env[1294]: time="2025-11-01T00:42:10.080955139Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:42:10.081531 env[1294]: time="2025-11-01T00:42:10.081027903Z" level=info msg="Connect containerd service" Nov 1 00:42:10.081531 env[1294]: time="2025-11-01T00:42:10.081068082Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:42:10.083573 env[1294]: time="2025-11-01T00:42:10.081966326Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:42:10.083681 env[1294]: time="2025-11-01T00:42:10.083661694Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:42:10.085656 env[1294]: time="2025-11-01T00:42:10.085634244Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:42:10.085840 env[1294]: time="2025-11-01T00:42:10.085815112Z" level=info msg="containerd successfully booted in 0.165829s" Nov 1 00:42:10.085996 systemd[1]: Started containerd.service. 
Nov 1 00:42:10.087773 env[1294]: time="2025-11-01T00:42:10.087713090Z" level=info msg="Start subscribing containerd event" Nov 1 00:42:10.089187 env[1294]: time="2025-11-01T00:42:10.089144115Z" level=info msg="Start recovering state" Nov 1 00:42:10.089272 env[1294]: time="2025-11-01T00:42:10.089255070Z" level=info msg="Start event monitor" Nov 1 00:42:10.089306 env[1294]: time="2025-11-01T00:42:10.089280637Z" level=info msg="Start snapshots syncer" Nov 1 00:42:10.089306 env[1294]: time="2025-11-01T00:42:10.089293689Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:42:10.089306 env[1294]: time="2025-11-01T00:42:10.089301902Z" level=info msg="Start streaming server" Nov 1 00:42:10.864096 tar[1293]: linux-amd64/README.md Nov 1 00:42:10.876624 systemd[1]: Finished prepare-helm.service. Nov 1 00:42:11.001454 locksmithd[1335]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:42:11.335527 systemd[1]: Started kubelet.service. Nov 1 00:42:11.372298 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:42:11.402758 systemd[1]: Finished sshd-keygen.service. Nov 1 00:42:11.406006 systemd[1]: Starting issuegen.service... Nov 1 00:42:11.421875 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:42:11.422338 systemd[1]: Finished issuegen.service. Nov 1 00:42:11.425768 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:42:11.439463 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:42:11.442137 systemd[1]: Started getty@tty1.service. Nov 1 00:42:11.444838 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:42:11.445657 systemd[1]: Reached target getty.target. Nov 1 00:42:11.446685 systemd[1]: Reached target multi-user.target. Nov 1 00:42:11.449346 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:42:11.462557 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Nov 1 00:42:11.462864 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:42:11.482293 systemd[1]: Startup finished in 5.905s (kernel) + 7.734s (userspace) = 13.640s. Nov 1 00:42:12.017512 kubelet[1361]: E1101 00:42:12.017451 1361 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:12.020053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:12.020242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:12.461984 systemd[1]: Created slice system-sshd.slice. Nov 1 00:42:12.464591 systemd[1]: Started sshd@0-64.23.181.132:22-139.178.89.65:57868.service. Nov 1 00:42:12.535176 sshd[1388]: Accepted publickey for core from 139.178.89.65 port 57868 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:12.538540 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:12.550739 systemd[1]: Created slice user-500.slice. Nov 1 00:42:12.552202 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:42:12.557952 systemd-logind[1286]: New session 1 of user core. Nov 1 00:42:12.567409 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:42:12.570424 systemd[1]: Starting user@500.service... Nov 1 00:42:12.578631 (systemd)[1393]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:12.668577 systemd[1393]: Queued start job for default target default.target. Nov 1 00:42:12.668874 systemd[1393]: Reached target paths.target. Nov 1 00:42:12.668892 systemd[1393]: Reached target sockets.target. Nov 1 00:42:12.668906 systemd[1393]: Reached target timers.target. Nov 1 00:42:12.668918 systemd[1393]: Reached target basic.target. 
Nov 1 00:42:12.669084 systemd[1]: Started user@500.service. Nov 1 00:42:12.670303 systemd[1]: Started session-1.scope. Nov 1 00:42:12.670676 systemd[1393]: Reached target default.target. Nov 1 00:42:12.670888 systemd[1393]: Startup finished in 83ms. Nov 1 00:42:12.736668 systemd[1]: Started sshd@1-64.23.181.132:22-139.178.89.65:57876.service. Nov 1 00:42:12.791471 sshd[1402]: Accepted publickey for core from 139.178.89.65 port 57876 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:12.793721 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:12.798995 systemd-logind[1286]: New session 2 of user core. Nov 1 00:42:12.799964 systemd[1]: Started session-2.scope. Nov 1 00:42:12.872905 sshd[1402]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:12.879510 systemd[1]: Started sshd@2-64.23.181.132:22-139.178.89.65:57884.service. Nov 1 00:42:12.881700 systemd[1]: sshd@1-64.23.181.132:22-139.178.89.65:57876.service: Deactivated successfully. Nov 1 00:42:12.883785 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:42:12.884600 systemd-logind[1286]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:42:12.886173 systemd-logind[1286]: Removed session 2. Nov 1 00:42:12.944448 sshd[1407]: Accepted publickey for core from 139.178.89.65 port 57884 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:12.946198 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:12.952429 systemd-logind[1286]: New session 3 of user core. Nov 1 00:42:12.953050 systemd[1]: Started session-3.scope. Nov 1 00:42:13.016359 sshd[1407]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:13.021625 systemd[1]: Started sshd@3-64.23.181.132:22-139.178.89.65:57890.service. Nov 1 00:42:13.022394 systemd[1]: sshd@2-64.23.181.132:22-139.178.89.65:57884.service: Deactivated successfully. 
Nov 1 00:42:13.023743 systemd-logind[1286]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:42:13.023836 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:42:13.030966 systemd-logind[1286]: Removed session 3. Nov 1 00:42:13.080380 sshd[1414]: Accepted publickey for core from 139.178.89.65 port 57890 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:13.082131 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:13.087667 systemd-logind[1286]: New session 4 of user core. Nov 1 00:42:13.088314 systemd[1]: Started session-4.scope. Nov 1 00:42:13.156342 sshd[1414]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:13.160498 systemd[1]: Started sshd@4-64.23.181.132:22-139.178.89.65:57892.service. Nov 1 00:42:13.163434 systemd[1]: sshd@3-64.23.181.132:22-139.178.89.65:57890.service: Deactivated successfully. Nov 1 00:42:13.165281 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:42:13.165938 systemd-logind[1286]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:42:13.167526 systemd-logind[1286]: Removed session 4. Nov 1 00:42:13.214906 sshd[1421]: Accepted publickey for core from 139.178.89.65 port 57892 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:42:13.217268 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:13.222621 systemd[1]: Started session-5.scope. Nov 1 00:42:13.223060 systemd-logind[1286]: New session 5 of user core. Nov 1 00:42:13.295762 sudo[1427]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:42:13.296560 sudo[1427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:42:13.340349 systemd[1]: Starting docker.service... 
Nov 1 00:42:13.397882 env[1437]: time="2025-11-01T00:42:13.397823773Z" level=info msg="Starting up" Nov 1 00:42:13.400205 env[1437]: time="2025-11-01T00:42:13.400159388Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:42:13.400205 env[1437]: time="2025-11-01T00:42:13.400187249Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:42:13.400362 env[1437]: time="2025-11-01T00:42:13.400223317Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:42:13.400362 env[1437]: time="2025-11-01T00:42:13.400237295Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:42:13.402256 env[1437]: time="2025-11-01T00:42:13.402226797Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:42:13.402388 env[1437]: time="2025-11-01T00:42:13.402371914Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:42:13.402458 env[1437]: time="2025-11-01T00:42:13.402442192Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:42:13.402515 env[1437]: time="2025-11-01T00:42:13.402503316Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:42:13.412749 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport123012805-merged.mount: Deactivated successfully. Nov 1 00:42:13.438914 env[1437]: time="2025-11-01T00:42:13.438863066Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:42:13.438914 env[1437]: time="2025-11-01T00:42:13.438892975Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:42:13.439206 env[1437]: time="2025-11-01T00:42:13.439159624Z" level=info msg="Loading containers: start." 
Nov 1 00:42:13.593126 kernel: Initializing XFRM netlink socket Nov 1 00:42:13.636964 env[1437]: time="2025-11-01T00:42:13.636910164Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:42:13.724244 systemd-networkd[1060]: docker0: Link UP Nov 1 00:42:13.739399 env[1437]: time="2025-11-01T00:42:13.739353616Z" level=info msg="Loading containers: done." Nov 1 00:42:13.754978 env[1437]: time="2025-11-01T00:42:13.754925552Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:42:13.755438 env[1437]: time="2025-11-01T00:42:13.755416961Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:42:13.755657 env[1437]: time="2025-11-01T00:42:13.755639882Z" level=info msg="Daemon has completed initialization" Nov 1 00:42:13.768598 systemd[1]: Started docker.service. Nov 1 00:42:13.778544 env[1437]: time="2025-11-01T00:42:13.778437692Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:42:13.804335 systemd[1]: Starting coreos-metadata.service... Nov 1 00:42:13.851426 coreos-metadata[1554]: Nov 01 00:42:13.851 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:42:13.877744 coreos-metadata[1554]: Nov 01 00:42:13.877 INFO Fetch successful Nov 1 00:42:13.896300 systemd[1]: Finished coreos-metadata.service. Nov 1 00:42:14.788467 env[1294]: time="2025-11-01T00:42:14.788420668Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:42:15.359766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846385010.mount: Deactivated successfully. 
Nov 1 00:42:16.887413 env[1294]: time="2025-11-01T00:42:16.887335515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.890879 env[1294]: time="2025-11-01T00:42:16.890823770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.893130 env[1294]: time="2025-11-01T00:42:16.893060756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:16.896479 env[1294]: time="2025-11-01T00:42:16.896412425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:42:16.897233 env[1294]: time="2025-11-01T00:42:16.897195688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:42:16.897407 env[1294]: time="2025-11-01T00:42:16.895381479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:18.579602 env[1294]: time="2025-11-01T00:42:18.579538392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:18.581547 env[1294]: time="2025-11-01T00:42:18.581498605Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:42:18.588050 env[1294]: time="2025-11-01T00:42:18.587994572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:18.589041 env[1294]: time="2025-11-01T00:42:18.589001369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:42:18.589754 env[1294]: time="2025-11-01T00:42:18.589723563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:42:18.591266 env[1294]: time="2025-11-01T00:42:18.591195012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:20.071516 env[1294]: time="2025-11-01T00:42:20.071447770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:20.073389 env[1294]: time="2025-11-01T00:42:20.073342463Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:20.075701 env[1294]: time="2025-11-01T00:42:20.075666598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:20.078674 env[1294]: time="2025-11-01T00:42:20.078614195Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:20.081694 env[1294]: time="2025-11-01T00:42:20.080379737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:42:20.082858 env[1294]: time="2025-11-01T00:42:20.082796277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:42:21.413606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712264066.mount: Deactivated successfully. Nov 1 00:42:22.229563 env[1294]: time="2025-11-01T00:42:22.229468427Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:22.231147 env[1294]: time="2025-11-01T00:42:22.231100727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:22.233073 env[1294]: time="2025-11-01T00:42:22.233011419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:22.234725 env[1294]: time="2025-11-01T00:42:22.234661409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:22.235194 env[1294]: time="2025-11-01T00:42:22.235135403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference 
\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:42:22.236433 env[1294]: time="2025-11-01T00:42:22.236328680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:42:22.271598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:42:22.271831 systemd[1]: Stopped kubelet.service. Nov 1 00:42:22.275758 systemd[1]: Starting kubelet.service... Nov 1 00:42:22.401922 systemd[1]: Started kubelet.service. Nov 1 00:42:22.466497 kubelet[1584]: E1101 00:42:22.466413 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:22.470101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:22.470275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:22.648614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518723104.mount: Deactivated successfully. 
Nov 1 00:42:23.653651 env[1294]: time="2025-11-01T00:42:23.653561080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:23.655631 env[1294]: time="2025-11-01T00:42:23.655574222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:23.657975 env[1294]: time="2025-11-01T00:42:23.657927158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:23.660748 env[1294]: time="2025-11-01T00:42:23.660689361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:23.662378 env[1294]: time="2025-11-01T00:42:23.662315345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:42:23.663312 env[1294]: time="2025-11-01T00:42:23.663268884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:42:24.080397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904835247.mount: Deactivated successfully. 
Nov 1 00:42:24.084743 env[1294]: time="2025-11-01T00:42:24.084688495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:24.086838 env[1294]: time="2025-11-01T00:42:24.086793770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:24.088974 env[1294]: time="2025-11-01T00:42:24.088929766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:24.090843 env[1294]: time="2025-11-01T00:42:24.090803439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:24.092231 env[1294]: time="2025-11-01T00:42:24.091659699Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:42:24.092894 env[1294]: time="2025-11-01T00:42:24.092862445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:42:24.594035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238572813.mount: Deactivated successfully. 
Nov 1 00:42:26.899187 env[1294]: time="2025-11-01T00:42:26.899098274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:26.901006 env[1294]: time="2025-11-01T00:42:26.900959514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:26.903176 env[1294]: time="2025-11-01T00:42:26.903135721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:26.905174 env[1294]: time="2025-11-01T00:42:26.905049656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:26.905966 env[1294]: time="2025-11-01T00:42:26.905928613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:42:29.679640 systemd[1]: Stopped kubelet.service. Nov 1 00:42:29.684377 systemd[1]: Starting kubelet.service... Nov 1 00:42:29.725828 systemd[1]: Reloading. 
Nov 1 00:42:29.873032 /usr/lib/systemd/system-generators/torcx-generator[1635]: time="2025-11-01T00:42:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:29.873067 /usr/lib/systemd/system-generators/torcx-generator[1635]: time="2025-11-01T00:42:29Z" level=info msg="torcx already run" Nov 1 00:42:30.013159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:30.013196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:30.039598 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:30.153692 systemd[1]: Started kubelet.service. Nov 1 00:42:30.157368 systemd[1]: Stopping kubelet.service... Nov 1 00:42:30.158792 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:42:30.159100 systemd[1]: Stopped kubelet.service. Nov 1 00:42:30.161813 systemd[1]: Starting kubelet.service... Nov 1 00:42:30.292973 systemd[1]: Started kubelet.service. Nov 1 00:42:30.368337 kubelet[1703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:30.368931 kubelet[1703]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:42:30.369033 kubelet[1703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:30.369343 kubelet[1703]: I1101 00:42:30.369284 1703 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:30.689039 kubelet[1703]: I1101 00:42:30.688414 1703 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:42:30.689325 kubelet[1703]: I1101 00:42:30.689285 1703 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:30.689767 kubelet[1703]: I1101 00:42:30.689740 1703 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:42:30.729763 kubelet[1703]: E1101 00:42:30.729690 1703 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.181.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:30.733414 kubelet[1703]: I1101 00:42:30.733364 1703 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:30.744352 kubelet[1703]: E1101 00:42:30.744311 1703 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:30.744616 kubelet[1703]: I1101 00:42:30.744597 1703 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 00:42:30.749976 kubelet[1703]: I1101 00:42:30.749937 1703 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:42:30.752710 kubelet[1703]: I1101 00:42:30.752623 1703 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:30.753245 kubelet[1703]: I1101 00:42:30.752928 1703 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-14edb40b39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nu
ll,"CgroupVersion":1} Nov 1 00:42:30.753520 kubelet[1703]: I1101 00:42:30.753499 1703 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:42:30.753638 kubelet[1703]: I1101 00:42:30.753623 1703 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:42:30.753900 kubelet[1703]: I1101 00:42:30.753884 1703 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:30.758396 kubelet[1703]: I1101 00:42:30.758354 1703 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:42:30.763742 kubelet[1703]: I1101 00:42:30.763689 1703 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:30.764000 kubelet[1703]: I1101 00:42:30.763982 1703 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:42:30.764103 kubelet[1703]: I1101 00:42:30.764087 1703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:30.776781 kubelet[1703]: W1101 00:42:30.776619 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.181.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-14edb40b39&limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:30.776971 kubelet[1703]: E1101 00:42:30.776804 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.181.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-14edb40b39&limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:30.776971 kubelet[1703]: I1101 00:42:30.776923 1703 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:30.777383 kubelet[1703]: I1101 00:42:30.777363 1703 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Nov 1 00:42:30.786739 kubelet[1703]: W1101 00:42:30.786673 1703 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:42:30.794832 kubelet[1703]: I1101 00:42:30.794782 1703 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:30.794832 kubelet[1703]: I1101 00:42:30.794842 1703 server.go:1287] "Started kubelet" Nov 1 00:42:30.800429 kubelet[1703]: W1101 00:42:30.800360 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.181.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:30.800695 kubelet[1703]: E1101 00:42:30.800665 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.181.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:30.801015 kubelet[1703]: I1101 00:42:30.800968 1703 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:30.805548 kubelet[1703]: I1101 00:42:30.805456 1703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:30.806139 kubelet[1703]: I1101 00:42:30.806109 1703 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:30.809001 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Nov 1 00:42:30.809311 kubelet[1703]: I1101 00:42:30.809291 1703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:30.812541 kubelet[1703]: E1101 00:42:30.811279 1703 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.181.132:6443/api/v1/namespaces/default/events\": dial tcp 64.23.181.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-14edb40b39.1873bb3d355f1425 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-14edb40b39,UID:ci-3510.3.8-n-14edb40b39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-14edb40b39,},FirstTimestamp:2025-11-01 00:42:30.794818597 +0000 UTC m=+0.491434018,LastTimestamp:2025-11-01 00:42:30.794818597 +0000 UTC m=+0.491434018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-14edb40b39,}" Nov 1 00:42:30.814576 kubelet[1703]: I1101 00:42:30.814551 1703 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:42:30.817713 kubelet[1703]: E1101 00:42:30.817689 1703 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:30.818110 kubelet[1703]: I1101 00:42:30.818068 1703 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:30.820449 kubelet[1703]: I1101 00:42:30.820418 1703 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:30.821235 kubelet[1703]: E1101 00:42:30.820704 1703 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-14edb40b39\" not found" Nov 1 00:42:30.821332 kubelet[1703]: I1101 00:42:30.821299 1703 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:30.821372 kubelet[1703]: I1101 00:42:30.821358 1703 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:30.822246 kubelet[1703]: I1101 00:42:30.822219 1703 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:42:30.822339 kubelet[1703]: I1101 00:42:30.822317 1703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:30.822677 kubelet[1703]: W1101 00:42:30.822627 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.181.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:30.822744 kubelet[1703]: E1101 00:42:30.822693 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.181.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:30.822826 
kubelet[1703]: E1101 00:42:30.822782 1703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.181.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-14edb40b39?timeout=10s\": dial tcp 64.23.181.132:6443: connect: connection refused" interval="200ms" Nov 1 00:42:30.824498 kubelet[1703]: I1101 00:42:30.824467 1703 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:42:30.852183 kubelet[1703]: I1101 00:42:30.852069 1703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:42:30.854006 kubelet[1703]: I1101 00:42:30.853968 1703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:42:30.854006 kubelet[1703]: I1101 00:42:30.854002 1703 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:42:30.854186 kubelet[1703]: I1101 00:42:30.854027 1703 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:42:30.854186 kubelet[1703]: I1101 00:42:30.854035 1703 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:42:30.854186 kubelet[1703]: E1101 00:42:30.854110 1703 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:30.859018 kubelet[1703]: W1101 00:42:30.858767 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.181.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:30.859018 kubelet[1703]: E1101 00:42:30.858855 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.181.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:30.865910 kubelet[1703]: I1101 00:42:30.865884 1703 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:30.866067 kubelet[1703]: I1101 00:42:30.866054 1703 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:30.866163 kubelet[1703]: I1101 00:42:30.866152 1703 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:30.868235 kubelet[1703]: I1101 00:42:30.868209 1703 policy_none.go:49] "None policy: Start" Nov 1 00:42:30.868421 kubelet[1703]: I1101 00:42:30.868407 1703 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:30.868516 kubelet[1703]: I1101 00:42:30.868506 1703 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:42:30.876018 kubelet[1703]: I1101 00:42:30.875974 1703 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:42:30.876432 kubelet[1703]: I1101 
00:42:30.876409 1703 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:30.876509 kubelet[1703]: I1101 00:42:30.876434 1703 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:30.877660 kubelet[1703]: I1101 00:42:30.877642 1703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:30.878482 kubelet[1703]: E1101 00:42:30.878465 1703 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:42:30.878747 kubelet[1703]: E1101 00:42:30.878725 1703 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-14edb40b39\" not found" Nov 1 00:42:30.962822 kubelet[1703]: E1101 00:42:30.960771 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:30.963445 kubelet[1703]: E1101 00:42:30.961418 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:30.964727 kubelet[1703]: E1101 00:42:30.964705 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:30.978048 kubelet[1703]: I1101 00:42:30.978005 1703 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:30.978509 kubelet[1703]: E1101 00:42:30.978476 1703 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.181.132:6443/api/v1/nodes\": dial tcp 64.23.181.132:6443: connect: connection refused" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022171 
kubelet[1703]: I1101 00:42:31.022117 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022171 kubelet[1703]: I1101 00:42:31.022172 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022491 kubelet[1703]: I1101 00:42:31.022202 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022491 kubelet[1703]: I1101 00:42:31.022227 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022491 kubelet[1703]: I1101 00:42:31.022256 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/853772f4309aef23fea984ca12df96e7-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.8-n-14edb40b39\" (UID: \"853772f4309aef23fea984ca12df96e7\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022491 kubelet[1703]: I1101 00:42:31.022310 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022491 kubelet[1703]: I1101 00:42:31.022337 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022674 kubelet[1703]: I1101 00:42:31.022363 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.022674 kubelet[1703]: I1101 00:42:31.022392 1703 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.023434 kubelet[1703]: E1101 00:42:31.023392 1703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.23.181.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-14edb40b39?timeout=10s\": dial tcp 64.23.181.132:6443: connect: connection refused" interval="400ms" Nov 1 00:42:31.180425 kubelet[1703]: I1101 00:42:31.180387 1703 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.181255 kubelet[1703]: E1101 00:42:31.181204 1703 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.181.132:6443/api/v1/nodes\": dial tcp 64.23.181.132:6443: connect: connection refused" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.265066 kubelet[1703]: E1101 00:42:31.264384 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.265428 kubelet[1703]: E1101 00:42:31.265402 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.266048 kubelet[1703]: E1101 00:42:31.264397 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.267225 env[1294]: time="2025-11-01T00:42:31.267036963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-14edb40b39,Uid:7a07ba0fd2630b4f933fbace74919737,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:31.267225 env[1294]: time="2025-11-01T00:42:31.267100452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-14edb40b39,Uid:810e5591256cf4ac73647e20842af51b,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:31.267756 env[1294]: time="2025-11-01T00:42:31.267501157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-14edb40b39,Uid:853772f4309aef23fea984ca12df96e7,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:31.424441 kubelet[1703]: E1101 00:42:31.424382 1703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.181.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-14edb40b39?timeout=10s\": dial tcp 64.23.181.132:6443: connect: connection refused" interval="800ms" Nov 1 00:42:31.583531 kubelet[1703]: I1101 00:42:31.583392 1703 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.584165 kubelet[1703]: E1101 00:42:31.584126 1703 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.181.132:6443/api/v1/nodes\": dial tcp 64.23.181.132:6443: connect: connection refused" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:31.619355 kubelet[1703]: W1101 00:42:31.619200 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.181.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-14edb40b39&limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:31.619355 kubelet[1703]: E1101 00:42:31.619304 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.181.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-14edb40b39&limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:31.680197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553293650.mount: Deactivated successfully. 
Nov 1 00:42:31.684340 env[1294]: time="2025-11-01T00:42:31.684288863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.686501 env[1294]: time="2025-11-01T00:42:31.686459300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.689054 env[1294]: time="2025-11-01T00:42:31.689014826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.690251 env[1294]: time="2025-11-01T00:42:31.690201844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.691170 env[1294]: time="2025-11-01T00:42:31.691141538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.692930 env[1294]: time="2025-11-01T00:42:31.692888994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.693862 env[1294]: time="2025-11-01T00:42:31.693832002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.697897 env[1294]: time="2025-11-01T00:42:31.697846308Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.699660 env[1294]: time="2025-11-01T00:42:31.699605750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.700945 env[1294]: time="2025-11-01T00:42:31.700912932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.701829 env[1294]: time="2025-11-01T00:42:31.701803288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.702494 env[1294]: time="2025-11-01T00:42:31.702470438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.737416 env[1294]: time="2025-11-01T00:42:31.731060474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.737416 env[1294]: time="2025-11-01T00:42:31.731259900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.737416 env[1294]: time="2025-11-01T00:42:31.731271646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.737416 env[1294]: time="2025-11-01T00:42:31.731439267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46ba40fb7ff79f161408632d61ce87fe63baf76f2cbf097087fadf72e8dcbd6e pid=1742 runtime=io.containerd.runc.v2 Nov 1 00:42:31.758421 env[1294]: time="2025-11-01T00:42:31.757011001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.758421 env[1294]: time="2025-11-01T00:42:31.757108984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.758421 env[1294]: time="2025-11-01T00:42:31.757144866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.758421 env[1294]: time="2025-11-01T00:42:31.757320963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db1d05d8bc34de7dae73e127341cfc87241ed3c21b55b88d87b43187437962d pid=1765 runtime=io.containerd.runc.v2 Nov 1 00:42:31.775021 env[1294]: time="2025-11-01T00:42:31.770492044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:31.775021 env[1294]: time="2025-11-01T00:42:31.770614262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:31.775021 env[1294]: time="2025-11-01T00:42:31.770648210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:31.775021 env[1294]: time="2025-11-01T00:42:31.770847693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3517ce2a4a8a421c9badc235f0a888aa9c00f27831ad02f03256aa09d664d908 pid=1785 runtime=io.containerd.runc.v2 Nov 1 00:42:31.828421 kubelet[1703]: W1101 00:42:31.828363 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.181.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:31.828595 kubelet[1703]: E1101 00:42:31.828436 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.181.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:31.882944 env[1294]: time="2025-11-01T00:42:31.881321952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-14edb40b39,Uid:7a07ba0fd2630b4f933fbace74919737,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db1d05d8bc34de7dae73e127341cfc87241ed3c21b55b88d87b43187437962d\"" Nov 1 00:42:31.883113 kubelet[1703]: E1101 00:42:31.882718 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.885033 env[1294]: time="2025-11-01T00:42:31.884992411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-14edb40b39,Uid:810e5591256cf4ac73647e20842af51b,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ba40fb7ff79f161408632d61ce87fe63baf76f2cbf097087fadf72e8dcbd6e\"" Nov 
1 00:42:31.886259 env[1294]: time="2025-11-01T00:42:31.886227951Z" level=info msg="CreateContainer within sandbox \"1db1d05d8bc34de7dae73e127341cfc87241ed3c21b55b88d87b43187437962d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:42:31.886925 kubelet[1703]: E1101 00:42:31.886783 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.888660 env[1294]: time="2025-11-01T00:42:31.888626569Z" level=info msg="CreateContainer within sandbox \"46ba40fb7ff79f161408632d61ce87fe63baf76f2cbf097087fadf72e8dcbd6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:42:31.897199 env[1294]: time="2025-11-01T00:42:31.897151988Z" level=info msg="CreateContainer within sandbox \"1db1d05d8bc34de7dae73e127341cfc87241ed3c21b55b88d87b43187437962d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a1c39504846d59c11d48feec0d2a87856e1242a4bbe3edb527410b0208bf80e\"" Nov 1 00:42:31.898052 env[1294]: time="2025-11-01T00:42:31.898016399Z" level=info msg="StartContainer for \"1a1c39504846d59c11d48feec0d2a87856e1242a4bbe3edb527410b0208bf80e\"" Nov 1 00:42:31.901907 env[1294]: time="2025-11-01T00:42:31.901863760Z" level=info msg="CreateContainer within sandbox \"46ba40fb7ff79f161408632d61ce87fe63baf76f2cbf097087fadf72e8dcbd6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c158b3b858d78a80413232ac08ae2f9178c088fc04460843e31f7d7ef60f0d7d\"" Nov 1 00:42:31.902777 env[1294]: time="2025-11-01T00:42:31.902749380Z" level=info msg="StartContainer for \"c158b3b858d78a80413232ac08ae2f9178c088fc04460843e31f7d7ef60f0d7d\"" Nov 1 00:42:31.905459 env[1294]: time="2025-11-01T00:42:31.905427098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-14edb40b39,Uid:853772f4309aef23fea984ca12df96e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3517ce2a4a8a421c9badc235f0a888aa9c00f27831ad02f03256aa09d664d908\"" Nov 1 00:42:31.906349 kubelet[1703]: E1101 00:42:31.906324 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:31.907792 env[1294]: time="2025-11-01T00:42:31.907760193Z" level=info msg="CreateContainer within sandbox \"3517ce2a4a8a421c9badc235f0a888aa9c00f27831ad02f03256aa09d664d908\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:42:31.916622 env[1294]: time="2025-11-01T00:42:31.916567620Z" level=info msg="CreateContainer within sandbox \"3517ce2a4a8a421c9badc235f0a888aa9c00f27831ad02f03256aa09d664d908\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d1c3dea2ed4c0f0fb6b734260afd071958404f7ebc8a76fc44c784a60ab5770\"" Nov 1 00:42:31.917194 env[1294]: time="2025-11-01T00:42:31.917162124Z" level=info msg="StartContainer for \"8d1c3dea2ed4c0f0fb6b734260afd071958404f7ebc8a76fc44c784a60ab5770\"" Nov 1 00:42:32.033548 env[1294]: time="2025-11-01T00:42:32.033488342Z" level=info msg="StartContainer for \"c158b3b858d78a80413232ac08ae2f9178c088fc04460843e31f7d7ef60f0d7d\" returns successfully" Nov 1 00:42:32.040820 env[1294]: time="2025-11-01T00:42:32.040767330Z" level=info msg="StartContainer for \"1a1c39504846d59c11d48feec0d2a87856e1242a4bbe3edb527410b0208bf80e\" returns successfully" Nov 1 00:42:32.060370 env[1294]: time="2025-11-01T00:42:32.060320108Z" level=info msg="StartContainer for \"8d1c3dea2ed4c0f0fb6b734260afd071958404f7ebc8a76fc44c784a60ab5770\" returns successfully" Nov 1 00:42:32.175128 kubelet[1703]: W1101 00:42:32.174956 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://64.23.181.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:32.175128 kubelet[1703]: E1101 00:42:32.175040 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.181.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:32.201887 kubelet[1703]: W1101 00:42:32.201809 1703 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.181.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.181.132:6443: connect: connection refused Nov 1 00:42:32.202069 kubelet[1703]: E1101 00:42:32.201890 1703 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.181.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.181.132:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:32.225820 kubelet[1703]: E1101 00:42:32.225770 1703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.181.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-14edb40b39?timeout=10s\": dial tcp 64.23.181.132:6443: connect: connection refused" interval="1.6s" Nov 1 00:42:32.386132 kubelet[1703]: I1101 00:42:32.386092 1703 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:32.386453 kubelet[1703]: E1101 00:42:32.386428 1703 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.181.132:6443/api/v1/nodes\": dial tcp 64.23.181.132:6443: connect: connection refused" 
node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:32.870161 kubelet[1703]: E1101 00:42:32.870124 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:32.870674 kubelet[1703]: E1101 00:42:32.870268 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:32.872149 kubelet[1703]: E1101 00:42:32.872120 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:32.872268 kubelet[1703]: E1101 00:42:32.872255 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:32.876196 kubelet[1703]: E1101 00:42:32.876166 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:32.876390 kubelet[1703]: E1101 00:42:32.876370 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:33.877693 kubelet[1703]: E1101 00:42:33.877649 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:33.878294 kubelet[1703]: E1101 00:42:33.877830 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" 
Nov 1 00:42:33.878294 kubelet[1703]: E1101 00:42:33.878103 1703 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:33.878294 kubelet[1703]: E1101 00:42:33.878201 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:33.988135 kubelet[1703]: I1101 00:42:33.988104 1703 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.036226 kubelet[1703]: E1101 00:42:34.036182 1703 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-14edb40b39\" not found" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.126396 kubelet[1703]: E1101 00:42:34.126279 1703 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-14edb40b39.1873bb3d355f1425 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-14edb40b39,UID:ci-3510.3.8-n-14edb40b39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-14edb40b39,},FirstTimestamp:2025-11-01 00:42:30.794818597 +0000 UTC m=+0.491434018,LastTimestamp:2025-11-01 00:42:30.794818597 +0000 UTC m=+0.491434018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-14edb40b39,}" Nov 1 00:42:34.221301 kubelet[1703]: I1101 00:42:34.221179 1703 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.221579 kubelet[1703]: I1101 00:42:34.221558 1703 kubelet.go:3194] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.243878 kubelet[1703]: E1101 00:42:34.243817 1703 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-14edb40b39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.244233 kubelet[1703]: I1101 00:42:34.244174 1703 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.251271 kubelet[1703]: E1101 00:42:34.251231 1703 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.251518 kubelet[1703]: I1101 00:42:34.251494 1703 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.254255 kubelet[1703]: E1101 00:42:34.254221 1703 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.798819 kubelet[1703]: I1101 00:42:34.798753 1703 apiserver.go:52] "Watching apiserver" Nov 1 00:42:34.822347 kubelet[1703]: I1101 00:42:34.822285 1703 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:42:34.878008 kubelet[1703]: I1101 00:42:34.877976 1703 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.881006 kubelet[1703]: E1101 00:42:34.880970 1703 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:34.881396 kubelet[1703]: E1101 00:42:34.881376 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:36.003853 kubelet[1703]: I1101 00:42:36.003806 1703 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:36.013637 kubelet[1703]: W1101 00:42:36.013601 1703 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:42:36.014295 kubelet[1703]: E1101 00:42:36.014270 1703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:36.292280 systemd[1]: Reloading. Nov 1 00:42:36.379505 /usr/lib/systemd/system-generators/torcx-generator[1986]: time="2025-11-01T00:42:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:36.387439 /usr/lib/systemd/system-generators/torcx-generator[1986]: time="2025-11-01T00:42:36Z" level=info msg="torcx already run" Nov 1 00:42:36.495862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:36.496112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 1 00:42:36.522271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:36.640829 systemd[1]: Stopping kubelet.service... Nov 1 00:42:36.662713 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:42:36.663226 systemd[1]: Stopped kubelet.service. Nov 1 00:42:36.666369 systemd[1]: Starting kubelet.service... Nov 1 00:42:37.779779 systemd[1]: Started kubelet.service. Nov 1 00:42:37.871353 kubelet[2047]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:37.872238 kubelet[2047]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:42:37.872379 kubelet[2047]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:42:37.875096 kubelet[2047]: I1101 00:42:37.875008 2047 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:37.892290 kubelet[2047]: I1101 00:42:37.892254 2047 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:42:37.892478 kubelet[2047]: I1101 00:42:37.892465 2047 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:37.894518 kubelet[2047]: I1101 00:42:37.894491 2047 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:42:37.902598 kubelet[2047]: I1101 00:42:37.902568 2047 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:42:37.921147 sudo[2061]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:42:37.921552 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:42:37.930309 kubelet[2047]: I1101 00:42:37.930212 2047 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:37.935408 kubelet[2047]: E1101 00:42:37.935361 2047 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:37.935607 kubelet[2047]: I1101 00:42:37.935592 2047 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:42:37.942459 kubelet[2047]: I1101 00:42:37.942421 2047 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:42:37.943202 kubelet[2047]: I1101 00:42:37.943163 2047 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:37.944799 kubelet[2047]: I1101 00:42:37.943339 2047 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-14edb40b39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:42:37.945004 kubelet[2047]: I1101 00:42:37.944987 2047 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 00:42:37.945104 kubelet[2047]: I1101 00:42:37.945063 2047 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:42:37.945227 kubelet[2047]: I1101 00:42:37.945217 2047 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:37.945433 kubelet[2047]: I1101 00:42:37.945422 2047 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:42:37.945532 kubelet[2047]: I1101 00:42:37.945520 2047 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:37.949296 kubelet[2047]: I1101 00:42:37.949277 2047 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:42:37.949471 kubelet[2047]: I1101 00:42:37.949458 2047 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:37.976245 kubelet[2047]: I1101 00:42:37.976217 2047 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:37.976870 kubelet[2047]: I1101 00:42:37.976851 2047 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:42:37.979513 kubelet[2047]: I1101 00:42:37.978924 2047 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:37.979771 kubelet[2047]: I1101 00:42:37.979754 2047 server.go:1287] "Started kubelet" Nov 1 00:42:37.980125 kubelet[2047]: I1101 00:42:37.978322 2047 apiserver.go:52] "Watching apiserver" Nov 1 00:42:37.991502 kubelet[2047]: I1101 00:42:37.991475 2047 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:38.012136 kubelet[2047]: I1101 00:42:38.012060 2047 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:38.019055 kubelet[2047]: I1101 00:42:38.012838 2047 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:38.019511 kubelet[2047]: I1101 00:42:38.013433 2047 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:38.020264 kubelet[2047]: E1101 00:42:38.015399 2047 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:38.020374 kubelet[2047]: I1101 00:42:38.016119 2047 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:38.020738 kubelet[2047]: I1101 00:42:38.016130 2047 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:38.020943 kubelet[2047]: I1101 00:42:38.020931 2047 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:38.021380 kubelet[2047]: I1101 00:42:38.021366 2047 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:42:38.022544 kubelet[2047]: I1101 00:42:38.022526 2047 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:38.023501 kubelet[2047]: I1101 00:42:38.023472 2047 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:38.029343 kubelet[2047]: I1101 00:42:38.029319 2047 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:42:38.029493 kubelet[2047]: I1101 00:42:38.029481 2047 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:42:38.050290 kubelet[2047]: I1101 00:42:38.050246 2047 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:42:38.051623 kubelet[2047]: I1101 00:42:38.051595 2047 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:42:38.051782 kubelet[2047]: I1101 00:42:38.051770 2047 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:42:38.051897 kubelet[2047]: I1101 00:42:38.051886 2047 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:42:38.051960 kubelet[2047]: I1101 00:42:38.051950 2047 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:42:38.052117 kubelet[2047]: E1101 00:42:38.052054 2047 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:38.126593 kubelet[2047]: I1101 00:42:38.126558 2047 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:38.126817 kubelet[2047]: I1101 00:42:38.126787 2047 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:38.126934 kubelet[2047]: I1101 00:42:38.126921 2047 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:38.127265 kubelet[2047]: I1101 00:42:38.127242 2047 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:42:38.127432 kubelet[2047]: I1101 00:42:38.127400 2047 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:42:38.127501 kubelet[2047]: I1101 00:42:38.127491 2047 policy_none.go:49] "None policy: Start" Nov 1 00:42:38.127568 kubelet[2047]: I1101 00:42:38.127558 2047 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:38.127649 kubelet[2047]: I1101 00:42:38.127636 2047 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:42:38.127887 kubelet[2047]: I1101 00:42:38.127873 2047 state_mem.go:75] "Updated machine memory state" Nov 1 00:42:38.132326 kubelet[2047]: I1101 00:42:38.132292 2047 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:42:38.132939 kubelet[2047]: I1101 00:42:38.132918 
2047 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:38.133230 kubelet[2047]: I1101 00:42:38.133174 2047 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:38.134529 kubelet[2047]: E1101 00:42:38.134340 2047 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:42:38.137254 kubelet[2047]: I1101 00:42:38.135222 2047 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:38.159652 kubelet[2047]: I1101 00:42:38.159618 2047 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.165943 kubelet[2047]: I1101 00:42:38.165913 2047 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.175445 kubelet[2047]: W1101 00:42:38.174698 2047 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:42:38.176945 kubelet[2047]: W1101 00:42:38.176905 2047 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:42:38.206236 kubelet[2047]: I1101 00:42:38.206155 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" podStartSLOduration=2.206135578 podStartE2EDuration="2.206135578s" podCreationTimestamp="2025-11-01 00:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:38.19613741 +0000 UTC m=+0.399892162" watchObservedRunningTime="2025-11-01 00:42:38.206135578 +0000 UTC m=+0.409890322" Nov 1 00:42:38.206453 
kubelet[2047]: I1101 00:42:38.206300 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" podStartSLOduration=0.206292524 podStartE2EDuration="206.292524ms" podCreationTimestamp="2025-11-01 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:38.205953338 +0000 UTC m=+0.409708088" watchObservedRunningTime="2025-11-01 00:42:38.206292524 +0000 UTC m=+0.410047268" Nov 1 00:42:38.221622 kubelet[2047]: I1101 00:42:38.221586 2047 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:42:38.227584 kubelet[2047]: I1101 00:42:38.227532 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227728 kubelet[2047]: I1101 00:42:38.227594 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227728 kubelet[2047]: I1101 00:42:38.227624 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 
00:42:38.227728 kubelet[2047]: I1101 00:42:38.227650 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/853772f4309aef23fea984ca12df96e7-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-14edb40b39\" (UID: \"853772f4309aef23fea984ca12df96e7\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227728 kubelet[2047]: I1101 00:42:38.227673 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227728 kubelet[2047]: I1101 00:42:38.227699 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/810e5591256cf4ac73647e20842af51b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-14edb40b39\" (UID: \"810e5591256cf4ac73647e20842af51b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227899 kubelet[2047]: I1101 00:42:38.227722 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227899 kubelet[2047]: I1101 00:42:38.227743 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.227899 kubelet[2047]: I1101 00:42:38.227771 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a07ba0fd2630b4f933fbace74919737-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-14edb40b39\" (UID: \"7a07ba0fd2630b4f933fbace74919737\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.238061 kubelet[2047]: I1101 00:42:38.238025 2047 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.248638 kubelet[2047]: I1101 00:42:38.248596 2047 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.248795 kubelet[2047]: I1101 00:42:38.248679 2047 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-14edb40b39" Nov 1 00:42:38.460049 kubelet[2047]: E1101 00:42:38.459943 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:38.478109 kubelet[2047]: E1101 00:42:38.478047 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:38.480236 kubelet[2047]: E1101 00:42:38.480203 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:38.716751 sudo[2061]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:39.087325 kubelet[2047]: E1101 00:42:39.087289 2047 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:39.088124 kubelet[2047]: E1101 00:42:39.088097 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:39.103715 kubelet[2047]: I1101 00:42:39.103652 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-14edb40b39" podStartSLOduration=1.10363264 podStartE2EDuration="1.10363264s" podCreationTimestamp="2025-11-01 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:38.219207074 +0000 UTC m=+0.422961826" watchObservedRunningTime="2025-11-01 00:42:39.10363264 +0000 UTC m=+1.307387392" Nov 1 00:42:39.165018 kubelet[2047]: E1101 00:42:39.164974 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:40.094305 kubelet[2047]: E1101 00:42:40.092280 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:40.094305 kubelet[2047]: E1101 00:42:40.093395 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:40.437833 sudo[1427]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:40.441650 sshd[1421]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:40.446403 systemd-logind[1286]: Session 5 logged out. 
Waiting for processes to exit. Nov 1 00:42:40.448259 systemd[1]: sshd@4-64.23.181.132:22-139.178.89.65:57892.service: Deactivated successfully. Nov 1 00:42:40.449153 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:42:40.450944 systemd-logind[1286]: Removed session 5. Nov 1 00:42:42.559669 kubelet[2047]: I1101 00:42:42.559640 2047 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:42:42.560573 env[1294]: time="2025-11-01T00:42:42.560531767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:42:42.561182 kubelet[2047]: I1101 00:42:42.561162 2047 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:42:43.228211 kubelet[2047]: I1101 00:42:43.228141 2047 status_manager.go:890] "Failed to get status for pod" podUID="029e7d07-d3c2-4ec4-90e5-48d429c8a003" pod="kube-system/kube-proxy-7bm7m" err="pods \"kube-proxy-7bm7m\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" Nov 1 00:42:43.229470 kubelet[2047]: W1101 00:42:43.229440 2047 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:43.229735 kubelet[2047]: W1101 00:42:43.229711 2047 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:43.229794 kubelet[2047]: E1101 00:42:43.229752 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:43.229875 kubelet[2047]: E1101 00:42:43.229727 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:43.229946 kubelet[2047]: W1101 00:42:43.229862 2047 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:43.230052 kubelet[2047]: E1101 00:42:43.230030 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:43.230140 kubelet[2047]: W1101 00:42:43.229640 2047 reflector.go:569] 
object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:43.230238 kubelet[2047]: E1101 00:42:43.230223 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:43.230320 kubelet[2047]: W1101 00:42:43.229688 2047 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:43.230434 kubelet[2047]: E1101 00:42:43.230411 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:43.259090 kubelet[2047]: I1101 00:42:43.258984 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls\") pod \"cilium-g2dmx\" (UID: 
\"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.259384 kubelet[2047]: I1101 00:42:43.259362 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg2rg\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.259529 kubelet[2047]: I1101 00:42:43.259515 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-proxy\") pod \"kube-proxy-7bm7m\" (UID: \"029e7d07-d3c2-4ec4-90e5-48d429c8a003\") " pod="kube-system/kube-proxy-7bm7m" Nov 1 00:42:43.259646 kubelet[2047]: I1101 00:42:43.259633 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-cgroup\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.259755 kubelet[2047]: I1101 00:42:43.259742 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-xtables-lock\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.259862 kubelet[2047]: I1101 00:42:43.259849 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-net\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.259962 kubelet[2047]: 
I1101 00:42:43.259949 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029e7d07-d3c2-4ec4-90e5-48d429c8a003-lib-modules\") pod \"kube-proxy-7bm7m\" (UID: \"029e7d07-d3c2-4ec4-90e5-48d429c8a003\") " pod="kube-system/kube-proxy-7bm7m" Nov 1 00:42:43.260103 kubelet[2047]: I1101 00:42:43.260051 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-bpf-maps\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260195 kubelet[2047]: I1101 00:42:43.260183 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a87caee-41be-4140-b973-d086be9585f5-clustermesh-secrets\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260312 kubelet[2047]: I1101 00:42:43.260299 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-kernel\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260414 kubelet[2047]: I1101 00:42:43.260401 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d28hk\" (UniqueName: \"kubernetes.io/projected/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-api-access-d28hk\") pod \"kube-proxy-7bm7m\" (UID: \"029e7d07-d3c2-4ec4-90e5-48d429c8a003\") " pod="kube-system/kube-proxy-7bm7m" Nov 1 00:42:43.260511 kubelet[2047]: I1101 00:42:43.260499 2047 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-etc-cni-netd\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260625 kubelet[2047]: I1101 00:42:43.260606 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-lib-modules\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260742 kubelet[2047]: I1101 00:42:43.260715 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029e7d07-d3c2-4ec4-90e5-48d429c8a003-xtables-lock\") pod \"kube-proxy-7bm7m\" (UID: \"029e7d07-d3c2-4ec4-90e5-48d429c8a003\") " pod="kube-system/kube-proxy-7bm7m" Nov 1 00:42:43.260846 kubelet[2047]: I1101 00:42:43.260830 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-hostproc\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260918 kubelet[2047]: I1101 00:42:43.260906 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cni-path\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.260992 kubelet[2047]: I1101 00:42:43.260978 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6a87caee-41be-4140-b973-d086be9585f5-cilium-config-path\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.261068 kubelet[2047]: I1101 00:42:43.261055 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-run\") pod \"cilium-g2dmx\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " pod="kube-system/cilium-g2dmx" Nov 1 00:42:43.663663 kubelet[2047]: I1101 00:42:43.663619 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1567bd-bf97-4078-83dd-335d5ef0941c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-d4fp8\" (UID: \"2b1567bd-bf97-4078-83dd-335d5ef0941c\") " pod="kube-system/cilium-operator-6c4d7847fc-d4fp8" Nov 1 00:42:43.664292 kubelet[2047]: I1101 00:42:43.664266 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxp9m\" (UniqueName: \"kubernetes.io/projected/2b1567bd-bf97-4078-83dd-335d5ef0941c-kube-api-access-dxp9m\") pod \"cilium-operator-6c4d7847fc-d4fp8\" (UID: \"2b1567bd-bf97-4078-83dd-335d5ef0941c\") " pod="kube-system/cilium-operator-6c4d7847fc-d4fp8" Nov 1 00:42:43.871864 kubelet[2047]: E1101 00:42:43.871826 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:44.099668 kubelet[2047]: E1101 00:42:44.099631 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:44.193970 kubelet[2047]: I1101 00:42:44.193931 2047 swap_util.go:74] "error creating dir 
to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:42:44.363458 kubelet[2047]: E1101 00:42:44.362972 2047 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Nov 1 00:42:44.363716 kubelet[2047]: E1101 00:42:44.363679 2047 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-g2dmx: failed to sync secret cache: timed out waiting for the condition Nov 1 00:42:44.363908 kubelet[2047]: E1101 00:42:44.363156 2047 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.364055 kubelet[2047]: E1101 00:42:44.364033 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls podName:6a87caee-41be-4140-b973-d086be9585f5 nodeName:}" failed. No retries permitted until 2025-11-01 00:42:44.8638888 +0000 UTC m=+7.067643554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls") pod "cilium-g2dmx" (UID: "6a87caee-41be-4140-b973-d086be9585f5") : failed to sync secret cache: timed out waiting for the condition Nov 1 00:42:44.365224 kubelet[2047]: E1101 00:42:44.365198 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-proxy podName:029e7d07-d3c2-4ec4-90e5-48d429c8a003 nodeName:}" failed. No retries permitted until 2025-11-01 00:42:44.86516467 +0000 UTC m=+7.068919418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-proxy") pod "kube-proxy-7bm7m" (UID: "029e7d07-d3c2-4ec4-90e5-48d429c8a003") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.375245 kubelet[2047]: E1101 00:42:44.375166 2047 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.375567 kubelet[2047]: E1101 00:42:44.375547 2047 projected.go:194] Error preparing data for projected volume kube-api-access-d28hk for pod kube-system/kube-proxy-7bm7m: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.375869 kubelet[2047]: E1101 00:42:44.375502 2047 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.376011 kubelet[2047]: E1101 00:42:44.375958 2047 projected.go:194] Error preparing data for projected volume kube-api-access-qg2rg for pod kube-system/cilium-g2dmx: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.376200 kubelet[2047]: E1101 00:42:44.376175 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-api-access-d28hk podName:029e7d07-d3c2-4ec4-90e5-48d429c8a003 nodeName:}" failed. No retries permitted until 2025-11-01 00:42:44.87584723 +0000 UTC m=+7.079601974 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d28hk" (UniqueName: "kubernetes.io/projected/029e7d07-d3c2-4ec4-90e5-48d429c8a003-kube-api-access-d28hk") pod "kube-proxy-7bm7m" (UID: "029e7d07-d3c2-4ec4-90e5-48d429c8a003") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.376936 kubelet[2047]: E1101 00:42:44.376895 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg podName:6a87caee-41be-4140-b973-d086be9585f5 nodeName:}" failed. No retries permitted until 2025-11-01 00:42:44.876853631 +0000 UTC m=+7.080608367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qg2rg" (UniqueName: "kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg") pod "cilium-g2dmx" (UID: "6a87caee-41be-4140-b973-d086be9585f5") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:44.821470 kubelet[2047]: E1101 00:42:44.821431 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:44.823027 env[1294]: time="2025-11-01T00:42:44.822695023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d4fp8,Uid:2b1567bd-bf97-4078-83dd-335d5ef0941c,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:44.852153 env[1294]: time="2025-11-01T00:42:44.851984453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:44.852153 env[1294]: time="2025-11-01T00:42:44.852039345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:44.852153 env[1294]: time="2025-11-01T00:42:44.852050056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:44.857358 env[1294]: time="2025-11-01T00:42:44.852372928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724 pid=2125 runtime=io.containerd.runc.v2 Nov 1 00:42:44.928783 env[1294]: time="2025-11-01T00:42:44.928735473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d4fp8,Uid:2b1567bd-bf97-4078-83dd-335d5ef0941c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\"" Nov 1 00:42:44.930282 kubelet[2047]: E1101 00:42:44.929735 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:44.931384 env[1294]: time="2025-11-01T00:42:44.931351594Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:42:45.011243 kubelet[2047]: E1101 00:42:45.011189 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:45.012971 env[1294]: time="2025-11-01T00:42:45.012912735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bm7m,Uid:029e7d07-d3c2-4ec4-90e5-48d429c8a003,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:45.022963 kubelet[2047]: E1101 00:42:45.022917 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:45.024024 env[1294]: time="2025-11-01T00:42:45.023966687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2dmx,Uid:6a87caee-41be-4140-b973-d086be9585f5,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:45.034803 env[1294]: time="2025-11-01T00:42:45.034719720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.035112 env[1294]: time="2025-11-01T00:42:45.035036465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.035309 env[1294]: time="2025-11-01T00:42:45.035263245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.035777 env[1294]: time="2025-11-01T00:42:45.035713516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7913d442800683cb088dab78d3327e7fef82ba16ca3957a81815cdb91e6e02 pid=2168 runtime=io.containerd.runc.v2 Nov 1 00:42:45.044172 env[1294]: time="2025-11-01T00:42:45.043327021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.044460 env[1294]: time="2025-11-01T00:42:45.044401627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.044612 env[1294]: time="2025-11-01T00:42:45.044585375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.045281 env[1294]: time="2025-11-01T00:42:45.045227917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6 pid=2186 runtime=io.containerd.runc.v2 Nov 1 00:42:45.100808 env[1294]: time="2025-11-01T00:42:45.100469554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7bm7m,Uid:029e7d07-d3c2-4ec4-90e5-48d429c8a003,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a7913d442800683cb088dab78d3327e7fef82ba16ca3957a81815cdb91e6e02\"" Nov 1 00:42:45.102889 kubelet[2047]: E1101 00:42:45.102750 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:45.108928 env[1294]: time="2025-11-01T00:42:45.108873834Z" level=info msg="CreateContainer within sandbox \"5a7913d442800683cb088dab78d3327e7fef82ba16ca3957a81815cdb91e6e02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:42:45.121977 env[1294]: time="2025-11-01T00:42:45.121930242Z" level=info msg="CreateContainer within sandbox \"5a7913d442800683cb088dab78d3327e7fef82ba16ca3957a81815cdb91e6e02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b22cc15cc6f0b4f7fb5a1f335a554f0903d3856df55304f7be4e9c13061f612\"" Nov 1 00:42:45.123020 env[1294]: time="2025-11-01T00:42:45.122986564Z" level=info msg="StartContainer for \"6b22cc15cc6f0b4f7fb5a1f335a554f0903d3856df55304f7be4e9c13061f612\"" Nov 1 00:42:45.125365 env[1294]: time="2025-11-01T00:42:45.125323692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2dmx,Uid:6a87caee-41be-4140-b973-d086be9585f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\"" Nov 1 00:42:45.126331 
kubelet[2047]: E1101 00:42:45.126013 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:45.236874 env[1294]: time="2025-11-01T00:42:45.236816680Z" level=info msg="StartContainer for \"6b22cc15cc6f0b4f7fb5a1f335a554f0903d3856df55304f7be4e9c13061f612\" returns successfully" Nov 1 00:42:46.018009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306794362.mount: Deactivated successfully. Nov 1 00:42:46.118376 kubelet[2047]: E1101 00:42:46.117720 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:46.132108 kubelet[2047]: I1101 00:42:46.130752 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7bm7m" podStartSLOduration=3.130540047 podStartE2EDuration="3.130540047s" podCreationTimestamp="2025-11-01 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:46.130129157 +0000 UTC m=+8.333883911" watchObservedRunningTime="2025-11-01 00:42:46.130540047 +0000 UTC m=+8.334294801" Nov 1 00:42:46.341525 kubelet[2047]: E1101 00:42:46.341116 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:46.654560 kubelet[2047]: E1101 00:42:46.654427 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:46.771780 env[1294]: time="2025-11-01T00:42:46.771710291Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:46.775624 env[1294]: time="2025-11-01T00:42:46.775577907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:46.776849 env[1294]: time="2025-11-01T00:42:46.776803674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:46.777856 env[1294]: time="2025-11-01T00:42:46.777808657Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:42:46.780036 env[1294]: time="2025-11-01T00:42:46.779966401Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:42:46.781797 env[1294]: time="2025-11-01T00:42:46.781737218Z" level=info msg="CreateContainer within sandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:42:46.794714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430386897.mount: Deactivated successfully. Nov 1 00:42:46.803136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262234194.mount: Deactivated successfully. 
Nov 1 00:42:46.806533 env[1294]: time="2025-11-01T00:42:46.806488237Z" level=info msg="CreateContainer within sandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\"" Nov 1 00:42:46.808871 env[1294]: time="2025-11-01T00:42:46.808797878Z" level=info msg="StartContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\"" Nov 1 00:42:46.877767 env[1294]: time="2025-11-01T00:42:46.877710796Z" level=info msg="StartContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" returns successfully" Nov 1 00:42:47.120583 kubelet[2047]: E1101 00:42:47.120551 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:47.122956 kubelet[2047]: E1101 00:42:47.121526 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:47.123159 kubelet[2047]: E1101 00:42:47.121787 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:47.123270 kubelet[2047]: E1101 00:42:47.122249 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:47.189173 kubelet[2047]: I1101 00:42:47.189109 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-d4fp8" podStartSLOduration=2.340908847 podStartE2EDuration="4.189085341s" podCreationTimestamp="2025-11-01 00:42:43 +0000 
UTC" firstStartedPulling="2025-11-01 00:42:44.930885926 +0000 UTC m=+7.134640658" lastFinishedPulling="2025-11-01 00:42:46.779062412 +0000 UTC m=+8.982817152" observedRunningTime="2025-11-01 00:42:47.158126754 +0000 UTC m=+9.361881507" watchObservedRunningTime="2025-11-01 00:42:47.189085341 +0000 UTC m=+9.392840088" Nov 1 00:42:48.130946 kubelet[2047]: E1101 00:42:48.130466 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:48.130946 kubelet[2047]: E1101 00:42:48.130495 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:51.753257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389910186.mount: Deactivated successfully. Nov 1 00:42:54.658144 env[1294]: time="2025-11-01T00:42:54.658087876Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.659874 env[1294]: time="2025-11-01T00:42:54.659825999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.661359 env[1294]: time="2025-11-01T00:42:54.661329179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.662589 env[1294]: time="2025-11-01T00:42:54.662555935Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:42:54.665029 env[1294]: time="2025-11-01T00:42:54.665001107Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:42:54.679185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033062830.mount: Deactivated successfully. Nov 1 00:42:54.687261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366525346.mount: Deactivated successfully. Nov 1 00:42:54.689719 env[1294]: time="2025-11-01T00:42:54.689678491Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\"" Nov 1 00:42:54.691861 env[1294]: time="2025-11-01T00:42:54.691818806Z" level=info msg="StartContainer for \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\"" Nov 1 00:42:54.765161 env[1294]: time="2025-11-01T00:42:54.764045081Z" level=info msg="StartContainer for \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\" returns successfully" Nov 1 00:42:54.794737 env[1294]: time="2025-11-01T00:42:54.794689024Z" level=info msg="shim disconnected" id=53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a Nov 1 00:42:54.795065 env[1294]: time="2025-11-01T00:42:54.795043801Z" level=warning msg="cleaning up after shim disconnected" id=53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a namespace=k8s.io Nov 1 00:42:54.795182 env[1294]: time="2025-11-01T00:42:54.795165568Z" level=info msg="cleaning up dead shim" Nov 1 00:42:54.804940 env[1294]: time="2025-11-01T00:42:54.804893291Z" level=warning 
msg="cleanup warnings time=\"2025-11-01T00:42:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n" Nov 1 00:42:54.858129 update_engine[1287]: I1101 00:42:54.857674 1287 update_attempter.cc:509] Updating boot flags... Nov 1 00:42:55.155111 kubelet[2047]: E1101 00:42:55.154736 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:55.163845 env[1294]: time="2025-11-01T00:42:55.163793945Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:42:55.193196 env[1294]: time="2025-11-01T00:42:55.193143588Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\"" Nov 1 00:42:55.196131 env[1294]: time="2025-11-01T00:42:55.194129169Z" level=info msg="StartContainer for \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\"" Nov 1 00:42:55.254188 env[1294]: time="2025-11-01T00:42:55.254142195Z" level=info msg="StartContainer for \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\" returns successfully" Nov 1 00:42:55.266116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:42:55.266421 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:42:55.266708 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:42:55.272044 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:55.291677 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:42:55.303054 env[1294]: time="2025-11-01T00:42:55.303006690Z" level=info msg="shim disconnected" id=e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5 Nov 1 00:42:55.303329 env[1294]: time="2025-11-01T00:42:55.303307745Z" level=warning msg="cleaning up after shim disconnected" id=e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5 namespace=k8s.io Nov 1 00:42:55.303419 env[1294]: time="2025-11-01T00:42:55.303404162Z" level=info msg="cleaning up dead shim" Nov 1 00:42:55.313033 env[1294]: time="2025-11-01T00:42:55.312986254Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2571 runtime=io.containerd.runc.v2\n" Nov 1 00:42:55.675652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a-rootfs.mount: Deactivated successfully. Nov 1 00:42:56.159372 kubelet[2047]: E1101 00:42:56.159331 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:56.176311 env[1294]: time="2025-11-01T00:42:56.176129232Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:42:56.195392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282703816.mount: Deactivated successfully. Nov 1 00:42:56.204626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801375732.mount: Deactivated successfully. 
Nov 1 00:42:56.207952 env[1294]: time="2025-11-01T00:42:56.207848387Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\"" Nov 1 00:42:56.210241 env[1294]: time="2025-11-01T00:42:56.208816309Z" level=info msg="StartContainer for \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\"" Nov 1 00:42:56.280142 env[1294]: time="2025-11-01T00:42:56.279939309Z" level=info msg="StartContainer for \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\" returns successfully" Nov 1 00:42:56.314668 env[1294]: time="2025-11-01T00:42:56.314614066Z" level=info msg="shim disconnected" id=80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d Nov 1 00:42:56.314668 env[1294]: time="2025-11-01T00:42:56.314659728Z" level=warning msg="cleaning up after shim disconnected" id=80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d namespace=k8s.io Nov 1 00:42:56.314668 env[1294]: time="2025-11-01T00:42:56.314669207Z" level=info msg="cleaning up dead shim" Nov 1 00:42:56.325280 env[1294]: time="2025-11-01T00:42:56.325221252Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2627 runtime=io.containerd.runc.v2\n" Nov 1 00:42:57.162761 kubelet[2047]: E1101 00:42:57.162715 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:57.166571 env[1294]: time="2025-11-01T00:42:57.165128947Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:42:57.179460 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2397002877.mount: Deactivated successfully. Nov 1 00:42:57.194336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679291505.mount: Deactivated successfully. Nov 1 00:42:57.201036 env[1294]: time="2025-11-01T00:42:57.200672950Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\"" Nov 1 00:42:57.204337 env[1294]: time="2025-11-01T00:42:57.204290932Z" level=info msg="StartContainer for \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\"" Nov 1 00:42:57.255104 env[1294]: time="2025-11-01T00:42:57.255026797Z" level=info msg="StartContainer for \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\" returns successfully" Nov 1 00:42:57.276526 env[1294]: time="2025-11-01T00:42:57.276477656Z" level=info msg="shim disconnected" id=90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754 Nov 1 00:42:57.276815 env[1294]: time="2025-11-01T00:42:57.276794653Z" level=warning msg="cleaning up after shim disconnected" id=90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754 namespace=k8s.io Nov 1 00:42:57.276909 env[1294]: time="2025-11-01T00:42:57.276893737Z" level=info msg="cleaning up dead shim" Nov 1 00:42:57.286881 env[1294]: time="2025-11-01T00:42:57.286826315Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:42:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2684 runtime=io.containerd.runc.v2\n" Nov 1 00:42:58.171872 kubelet[2047]: E1101 00:42:58.171143 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:58.175887 env[1294]: time="2025-11-01T00:42:58.175834414Z" level=info 
msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:42:58.194479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858981506.mount: Deactivated successfully. Nov 1 00:42:58.205840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206987539.mount: Deactivated successfully. Nov 1 00:42:58.209754 env[1294]: time="2025-11-01T00:42:58.209708494Z" level=info msg="CreateContainer within sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\"" Nov 1 00:42:58.210369 env[1294]: time="2025-11-01T00:42:58.210342025Z" level=info msg="StartContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\"" Nov 1 00:42:58.286996 env[1294]: time="2025-11-01T00:42:58.286948480Z" level=info msg="StartContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" returns successfully" Nov 1 00:42:58.435713 kubelet[2047]: I1101 00:42:58.434613 2047 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:42:58.477358 kubelet[2047]: I1101 00:42:58.477312 2047 status_manager.go:890] "Failed to get status for pod" podUID="b9e8281f-e331-47d1-8836-8e198677f625" pod="kube-system/coredns-668d6bf9bc-jkhz7" err="pods \"coredns-668d6bf9bc-jkhz7\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" Nov 1 00:42:58.479170 kubelet[2047]: W1101 00:42:58.479128 2047 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.8-n-14edb40b39" cannot list resource "configmaps" in API 
group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object Nov 1 00:42:58.479446 kubelet[2047]: E1101 00:42:58.479404 2047 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.8-n-14edb40b39\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-14edb40b39' and this object" logger="UnhandledError" Nov 1 00:42:58.503579 kubelet[2047]: I1101 00:42:58.503501 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fba01d-f737-418a-91a5-d16b3c359ce5-config-volume\") pod \"coredns-668d6bf9bc-4kvqm\" (UID: \"34fba01d-f737-418a-91a5-d16b3c359ce5\") " pod="kube-system/coredns-668d6bf9bc-4kvqm" Nov 1 00:42:58.503579 kubelet[2047]: I1101 00:42:58.503574 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2n86\" (UniqueName: \"kubernetes.io/projected/34fba01d-f737-418a-91a5-d16b3c359ce5-kube-api-access-h2n86\") pod \"coredns-668d6bf9bc-4kvqm\" (UID: \"34fba01d-f737-418a-91a5-d16b3c359ce5\") " pod="kube-system/coredns-668d6bf9bc-4kvqm" Nov 1 00:42:58.503848 kubelet[2047]: I1101 00:42:58.503595 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r52xz\" (UniqueName: \"kubernetes.io/projected/b9e8281f-e331-47d1-8836-8e198677f625-kube-api-access-r52xz\") pod \"coredns-668d6bf9bc-jkhz7\" (UID: \"b9e8281f-e331-47d1-8836-8e198677f625\") " pod="kube-system/coredns-668d6bf9bc-jkhz7" Nov 1 00:42:58.503848 kubelet[2047]: I1101 00:42:58.503629 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/b9e8281f-e331-47d1-8836-8e198677f625-config-volume\") pod \"coredns-668d6bf9bc-jkhz7\" (UID: \"b9e8281f-e331-47d1-8836-8e198677f625\") " pod="kube-system/coredns-668d6bf9bc-jkhz7" Nov 1 00:42:59.177199 kubelet[2047]: E1101 00:42:59.177164 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:42:59.202940 kubelet[2047]: I1101 00:42:59.202853 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g2dmx" podStartSLOduration=6.666788721 podStartE2EDuration="16.202829809s" podCreationTimestamp="2025-11-01 00:42:43 +0000 UTC" firstStartedPulling="2025-11-01 00:42:45.127312305 +0000 UTC m=+7.331067050" lastFinishedPulling="2025-11-01 00:42:54.663353393 +0000 UTC m=+16.867108138" observedRunningTime="2025-11-01 00:42:59.202341544 +0000 UTC m=+21.406096296" watchObservedRunningTime="2025-11-01 00:42:59.202829809 +0000 UTC m=+21.406584565" Nov 1 00:42:59.604965 kubelet[2047]: E1101 00:42:59.604912 2047 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:59.605313 kubelet[2047]: E1101 00:42:59.605170 2047 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:59.605385 kubelet[2047]: E1101 00:42:59.605280 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34fba01d-f737-418a-91a5-d16b3c359ce5-config-volume podName:34fba01d-f737-418a-91a5-d16b3c359ce5 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:00.105256984 +0000 UTC m=+22.309011716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34fba01d-f737-418a-91a5-d16b3c359ce5-config-volume") pod "coredns-668d6bf9bc-4kvqm" (UID: "34fba01d-f737-418a-91a5-d16b3c359ce5") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:42:59.605385 kubelet[2047]: E1101 00:42:59.605366 2047 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9e8281f-e331-47d1-8836-8e198677f625-config-volume podName:b9e8281f-e331-47d1-8836-8e198677f625 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:00.10535076 +0000 UTC m=+22.309105492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9e8281f-e331-47d1-8836-8e198677f625-config-volume") pod "coredns-668d6bf9bc-jkhz7" (UID: "b9e8281f-e331-47d1-8836-8e198677f625") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:00.180282 kubelet[2047]: E1101 00:43:00.180225 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:00.275348 kubelet[2047]: E1101 00:43:00.275290 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:00.277204 env[1294]: time="2025-11-01T00:43:00.276312259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jkhz7,Uid:b9e8281f-e331-47d1-8836-8e198677f625,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:00.279059 kubelet[2047]: E1101 00:43:00.278937 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:00.282237 env[1294]: 
time="2025-11-01T00:43:00.281816769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4kvqm,Uid:34fba01d-f737-418a-91a5-d16b3c359ce5,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:00.491417 systemd-networkd[1060]: cilium_host: Link UP Nov 1 00:43:00.491619 systemd-networkd[1060]: cilium_net: Link UP Nov 1 00:43:00.494337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 00:43:00.494459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:43:00.493646 systemd-networkd[1060]: cilium_net: Gained carrier Nov 1 00:43:00.493940 systemd-networkd[1060]: cilium_host: Gained carrier Nov 1 00:43:00.639278 systemd-networkd[1060]: cilium_vxlan: Link UP Nov 1 00:43:00.639285 systemd-networkd[1060]: cilium_vxlan: Gained carrier Nov 1 00:43:01.032110 kernel: NET: Registered PF_ALG protocol family Nov 1 00:43:01.143301 systemd-networkd[1060]: cilium_host: Gained IPv6LL Nov 1 00:43:01.184637 kubelet[2047]: E1101 00:43:01.184592 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:01.527360 systemd-networkd[1060]: cilium_net: Gained IPv6LL Nov 1 00:43:01.913557 systemd-networkd[1060]: cilium_vxlan: Gained IPv6LL Nov 1 00:43:01.924951 systemd-networkd[1060]: lxc_health: Link UP Nov 1 00:43:01.936828 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:43:01.936254 systemd-networkd[1060]: lxc_health: Gained carrier Nov 1 00:43:02.364399 systemd-networkd[1060]: lxcf504172ce464: Link UP Nov 1 00:43:02.384178 kernel: eth0: renamed from tmp9a379 Nov 1 00:43:02.393798 systemd-networkd[1060]: lxce7d75ee89ecc: Link UP Nov 1 00:43:02.411171 kernel: eth0: renamed from tmpec582 Nov 1 00:43:02.419861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf504172ce464: link becomes ready Nov 1 00:43:02.419110 systemd-networkd[1060]: lxcf504172ce464: Gained 
carrier Nov 1 00:43:02.425722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce7d75ee89ecc: link becomes ready Nov 1 00:43:02.425962 systemd-networkd[1060]: lxce7d75ee89ecc: Gained carrier Nov 1 00:43:03.024709 kubelet[2047]: E1101 00:43:03.024664 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:03.383438 systemd-networkd[1060]: lxc_health: Gained IPv6LL Nov 1 00:43:04.024364 systemd-networkd[1060]: lxce7d75ee89ecc: Gained IPv6LL Nov 1 00:43:04.279293 systemd-networkd[1060]: lxcf504172ce464: Gained IPv6LL Nov 1 00:43:06.313642 kubelet[2047]: I1101 00:43:06.313600 2047 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:06.314252 kubelet[2047]: E1101 00:43:06.314222 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:06.748604 env[1294]: time="2025-11-01T00:43:06.736955210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:06.748604 env[1294]: time="2025-11-01T00:43:06.737003996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:06.748604 env[1294]: time="2025-11-01T00:43:06.737015998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:06.748604 env[1294]: time="2025-11-01T00:43:06.737252491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec582890fc4ec70f4f1619d2b557abfe0d5a5c1df77a42f7c7f3144ce2db5b5b pid=3239 runtime=io.containerd.runc.v2 Nov 1 00:43:06.770046 systemd[1]: run-containerd-runc-k8s.io-ec582890fc4ec70f4f1619d2b557abfe0d5a5c1df77a42f7c7f3144ce2db5b5b-runc.5EGjGZ.mount: Deactivated successfully. Nov 1 00:43:06.795948 env[1294]: time="2025-11-01T00:43:06.793358754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:06.795948 env[1294]: time="2025-11-01T00:43:06.793499762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:06.795948 env[1294]: time="2025-11-01T00:43:06.793516725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:06.795948 env[1294]: time="2025-11-01T00:43:06.793825395Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a379eef6b4d16038caa94bc6d52b8efbf4729451574fbaa149887233a0376f3 pid=3267 runtime=io.containerd.runc.v2 Nov 1 00:43:06.908155 env[1294]: time="2025-11-01T00:43:06.908102059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jkhz7,Uid:b9e8281f-e331-47d1-8836-8e198677f625,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec582890fc4ec70f4f1619d2b557abfe0d5a5c1df77a42f7c7f3144ce2db5b5b\"" Nov 1 00:43:06.912438 kubelet[2047]: E1101 00:43:06.910189 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:06.917849 env[1294]: time="2025-11-01T00:43:06.917356092Z" level=info msg="CreateContainer within sandbox \"ec582890fc4ec70f4f1619d2b557abfe0d5a5c1df77a42f7c7f3144ce2db5b5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:06.932636 env[1294]: time="2025-11-01T00:43:06.932580273Z" level=info msg="CreateContainer within sandbox \"ec582890fc4ec70f4f1619d2b557abfe0d5a5c1df77a42f7c7f3144ce2db5b5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce358f8ee2e8d2cf7daaa226e662840cfdb9c3286b83b7c21d73558048667585\"" Nov 1 00:43:06.935450 env[1294]: time="2025-11-01T00:43:06.935387390Z" level=info msg="StartContainer for \"ce358f8ee2e8d2cf7daaa226e662840cfdb9c3286b83b7c21d73558048667585\"" Nov 1 00:43:06.941461 env[1294]: time="2025-11-01T00:43:06.941406528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4kvqm,Uid:34fba01d-f737-418a-91a5-d16b3c359ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a379eef6b4d16038caa94bc6d52b8efbf4729451574fbaa149887233a0376f3\"" Nov 1 
00:43:06.944881 kubelet[2047]: E1101 00:43:06.942506 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:06.947831 env[1294]: time="2025-11-01T00:43:06.947787359Z" level=info msg="CreateContainer within sandbox \"9a379eef6b4d16038caa94bc6d52b8efbf4729451574fbaa149887233a0376f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:06.970751 env[1294]: time="2025-11-01T00:43:06.970690955Z" level=info msg="CreateContainer within sandbox \"9a379eef6b4d16038caa94bc6d52b8efbf4729451574fbaa149887233a0376f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11918da63c8e14b028fe87cd81fb70a67257a911922ddf7cbdf4b33973fced68\"" Nov 1 00:43:06.973941 env[1294]: time="2025-11-01T00:43:06.973878198Z" level=info msg="StartContainer for \"11918da63c8e14b028fe87cd81fb70a67257a911922ddf7cbdf4b33973fced68\"" Nov 1 00:43:07.021489 env[1294]: time="2025-11-01T00:43:07.021380171Z" level=info msg="StartContainer for \"ce358f8ee2e8d2cf7daaa226e662840cfdb9c3286b83b7c21d73558048667585\" returns successfully" Nov 1 00:43:07.049064 env[1294]: time="2025-11-01T00:43:07.049008131Z" level=info msg="StartContainer for \"11918da63c8e14b028fe87cd81fb70a67257a911922ddf7cbdf4b33973fced68\" returns successfully" Nov 1 00:43:07.200636 kubelet[2047]: E1101 00:43:07.200546 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:07.204521 kubelet[2047]: E1101 00:43:07.204486 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:07.204818 kubelet[2047]: E1101 00:43:07.204618 2047 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:07.232565 kubelet[2047]: I1101 00:43:07.232504 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4kvqm" podStartSLOduration=24.232484778 podStartE2EDuration="24.232484778s" podCreationTimestamp="2025-11-01 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:07.231580253 +0000 UTC m=+29.435335003" watchObservedRunningTime="2025-11-01 00:43:07.232484778 +0000 UTC m=+29.436239532" Nov 1 00:43:07.248368 kubelet[2047]: I1101 00:43:07.248299 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jkhz7" podStartSLOduration=24.248274604 podStartE2EDuration="24.248274604s" podCreationTimestamp="2025-11-01 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:07.246000347 +0000 UTC m=+29.449755100" watchObservedRunningTime="2025-11-01 00:43:07.248274604 +0000 UTC m=+29.452029351" Nov 1 00:43:07.743529 systemd[1]: run-containerd-runc-k8s.io-9a379eef6b4d16038caa94bc6d52b8efbf4729451574fbaa149887233a0376f3-runc.sQGIvs.mount: Deactivated successfully. 
Nov 1 00:43:08.207935 kubelet[2047]: E1101 00:43:08.207894 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:08.208698 kubelet[2047]: E1101 00:43:08.208654 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:09.210830 kubelet[2047]: E1101 00:43:09.210797 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:09.211532 kubelet[2047]: E1101 00:43:09.211497 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:43:23.755156 systemd[1]: Started sshd@5-64.23.181.132:22-139.178.89.65:60672.service. Nov 1 00:43:23.827408 sshd[3400]: Accepted publickey for core from 139.178.89.65 port 60672 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:23.829709 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:23.836302 systemd-logind[1286]: New session 6 of user core. Nov 1 00:43:23.837107 systemd[1]: Started session-6.scope. Nov 1 00:43:24.040706 sshd[3400]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:24.045352 systemd-logind[1286]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:43:24.045749 systemd[1]: sshd@5-64.23.181.132:22-139.178.89.65:60672.service: Deactivated successfully. Nov 1 00:43:24.046923 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:43:24.047762 systemd-logind[1286]: Removed session 6. 
Nov 1 00:43:29.044309 systemd[1]: Started sshd@6-64.23.181.132:22-139.178.89.65:36600.service. Nov 1 00:43:29.095372 sshd[3413]: Accepted publickey for core from 139.178.89.65 port 36600 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:29.097021 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:29.104169 systemd[1]: Started session-7.scope. Nov 1 00:43:29.104626 systemd-logind[1286]: New session 7 of user core. Nov 1 00:43:29.265337 sshd[3413]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:29.268662 systemd[1]: sshd@6-64.23.181.132:22-139.178.89.65:36600.service: Deactivated successfully. Nov 1 00:43:29.269609 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:29.270443 systemd-logind[1286]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:29.271347 systemd-logind[1286]: Removed session 7. Nov 1 00:43:34.272177 systemd[1]: Started sshd@7-64.23.181.132:22-139.178.89.65:36606.service. Nov 1 00:43:34.330261 sshd[3427]: Accepted publickey for core from 139.178.89.65 port 36606 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:34.332672 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:34.338344 systemd[1]: Started session-8.scope. Nov 1 00:43:34.338890 systemd-logind[1286]: New session 8 of user core. Nov 1 00:43:34.497713 sshd[3427]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:34.500692 systemd[1]: sshd@7-64.23.181.132:22-139.178.89.65:36606.service: Deactivated successfully. Nov 1 00:43:34.502130 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:34.502266 systemd-logind[1286]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:43:34.503733 systemd-logind[1286]: Removed session 8. Nov 1 00:43:39.503087 systemd[1]: Started sshd@8-64.23.181.132:22-139.178.89.65:38910.service. 
Nov 1 00:43:39.565188 sshd[3443]: Accepted publickey for core from 139.178.89.65 port 38910 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:39.567263 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:39.572633 systemd-logind[1286]: New session 9 of user core. Nov 1 00:43:39.573114 systemd[1]: Started session-9.scope. Nov 1 00:43:39.723376 sshd[3443]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:39.727554 systemd-logind[1286]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:43:39.727990 systemd[1]: sshd@8-64.23.181.132:22-139.178.89.65:38910.service: Deactivated successfully. Nov 1 00:43:39.729125 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:43:39.729764 systemd-logind[1286]: Removed session 9. Nov 1 00:43:44.729172 systemd[1]: Started sshd@9-64.23.181.132:22-139.178.89.65:38912.service. Nov 1 00:43:44.787401 sshd[3457]: Accepted publickey for core from 139.178.89.65 port 38912 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:44.790007 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:44.795420 systemd-logind[1286]: New session 10 of user core. Nov 1 00:43:44.797048 systemd[1]: Started session-10.scope. Nov 1 00:43:44.957510 sshd[3457]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:44.962282 systemd[1]: Started sshd@10-64.23.181.132:22-139.178.89.65:38916.service. Nov 1 00:43:44.972040 systemd-logind[1286]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:43:44.973153 systemd[1]: sshd@9-64.23.181.132:22-139.178.89.65:38912.service: Deactivated successfully. Nov 1 00:43:44.975262 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:43:44.976679 systemd-logind[1286]: Removed session 10. 
Nov 1 00:43:45.026695 sshd[3468]: Accepted publickey for core from 139.178.89.65 port 38916 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:45.028953 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:45.034945 systemd[1]: Started session-11.scope. Nov 1 00:43:45.035727 systemd-logind[1286]: New session 11 of user core. Nov 1 00:43:45.262536 systemd[1]: Started sshd@11-64.23.181.132:22-139.178.89.65:38932.service. Nov 1 00:43:45.272799 sshd[3468]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:45.284675 systemd[1]: sshd@10-64.23.181.132:22-139.178.89.65:38916.service: Deactivated successfully. Nov 1 00:43:45.286506 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:43:45.287799 systemd-logind[1286]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:43:45.291445 systemd-logind[1286]: Removed session 11. Nov 1 00:43:45.343381 sshd[3478]: Accepted publickey for core from 139.178.89.65 port 38932 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:45.345440 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:45.350842 systemd-logind[1286]: New session 12 of user core. Nov 1 00:43:45.351934 systemd[1]: Started session-12.scope. Nov 1 00:43:45.503377 sshd[3478]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:45.507490 systemd[1]: sshd@11-64.23.181.132:22-139.178.89.65:38932.service: Deactivated successfully. Nov 1 00:43:45.508613 systemd-logind[1286]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:43:45.508731 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:43:45.509995 systemd-logind[1286]: Removed session 12. Nov 1 00:43:50.508934 systemd[1]: Started sshd@12-64.23.181.132:22-139.178.89.65:39684.service. 
Nov 1 00:43:50.567499 sshd[3495]: Accepted publickey for core from 139.178.89.65 port 39684 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:50.569286 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:50.574893 systemd-logind[1286]: New session 13 of user core. Nov 1 00:43:50.575584 systemd[1]: Started session-13.scope. Nov 1 00:43:50.720153 sshd[3495]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:50.723447 systemd-logind[1286]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:43:50.723671 systemd[1]: sshd@12-64.23.181.132:22-139.178.89.65:39684.service: Deactivated successfully. Nov 1 00:43:50.724695 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:43:50.726186 systemd-logind[1286]: Removed session 13. Nov 1 00:43:55.724868 systemd[1]: Started sshd@13-64.23.181.132:22-139.178.89.65:39696.service. Nov 1 00:43:55.783510 sshd[3508]: Accepted publickey for core from 139.178.89.65 port 39696 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:43:55.786181 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:55.792330 systemd[1]: Started session-14.scope. Nov 1 00:43:55.792925 systemd-logind[1286]: New session 14 of user core. Nov 1 00:43:55.943266 sshd[3508]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:55.947525 systemd[1]: sshd@13-64.23.181.132:22-139.178.89.65:39696.service: Deactivated successfully. Nov 1 00:43:55.949429 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:43:55.949489 systemd-logind[1286]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:43:55.950664 systemd-logind[1286]: Removed session 14. 
Nov 1 00:43:59.053260 kubelet[2047]: E1101 00:43:59.053197 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:00.950173 systemd[1]: Started sshd@14-64.23.181.132:22-139.178.89.65:58636.service. Nov 1 00:44:01.013678 sshd[3521]: Accepted publickey for core from 139.178.89.65 port 58636 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:01.016162 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:01.022166 systemd-logind[1286]: New session 15 of user core. Nov 1 00:44:01.023046 systemd[1]: Started session-15.scope. Nov 1 00:44:01.054662 kubelet[2047]: E1101 00:44:01.054231 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:01.189299 sshd[3521]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:01.194588 systemd[1]: Started sshd@15-64.23.181.132:22-139.178.89.65:58650.service. Nov 1 00:44:01.197377 systemd[1]: sshd@14-64.23.181.132:22-139.178.89.65:58636.service: Deactivated successfully. Nov 1 00:44:01.199250 systemd-logind[1286]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:44:01.199526 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:44:01.200984 systemd-logind[1286]: Removed session 15. Nov 1 00:44:01.256597 sshd[3532]: Accepted publickey for core from 139.178.89.65 port 58650 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:01.259103 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:01.265165 systemd-logind[1286]: New session 16 of user core. Nov 1 00:44:01.265970 systemd[1]: Started session-16.scope. 
Nov 1 00:44:01.652475 sshd[3532]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:01.659450 systemd[1]: Started sshd@16-64.23.181.132:22-139.178.89.65:58654.service. Nov 1 00:44:01.662328 systemd[1]: sshd@15-64.23.181.132:22-139.178.89.65:58650.service: Deactivated successfully. Nov 1 00:44:01.664908 systemd-logind[1286]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:44:01.665710 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:44:01.667122 systemd-logind[1286]: Removed session 16. Nov 1 00:44:01.728244 sshd[3543]: Accepted publickey for core from 139.178.89.65 port 58654 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:01.731484 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:01.738357 systemd-logind[1286]: New session 17 of user core. Nov 1 00:44:01.739667 systemd[1]: Started session-17.scope. Nov 1 00:44:02.503495 sshd[3543]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:02.510486 systemd[1]: Started sshd@17-64.23.181.132:22-139.178.89.65:58660.service. Nov 1 00:44:02.516827 systemd[1]: sshd@16-64.23.181.132:22-139.178.89.65:58654.service: Deactivated successfully. Nov 1 00:44:02.518962 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:44:02.520677 systemd-logind[1286]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:44:02.523538 systemd-logind[1286]: Removed session 17. Nov 1 00:44:02.581840 sshd[3561]: Accepted publickey for core from 139.178.89.65 port 58660 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:02.581657 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:02.588020 systemd[1]: Started session-18.scope. Nov 1 00:44:02.588889 systemd-logind[1286]: New session 18 of user core. 
Nov 1 00:44:02.922295 sshd[3561]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:02.923949 systemd[1]: Started sshd@18-64.23.181.132:22-139.178.89.65:58664.service. Nov 1 00:44:02.933936 systemd[1]: sshd@17-64.23.181.132:22-139.178.89.65:58660.service: Deactivated successfully. Nov 1 00:44:02.936431 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:44:02.936483 systemd-logind[1286]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:44:02.939332 systemd-logind[1286]: Removed session 18. Nov 1 00:44:02.994039 sshd[3572]: Accepted publickey for core from 139.178.89.65 port 58664 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:02.996304 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:03.011030 systemd[1]: Started session-19.scope. Nov 1 00:44:03.012665 systemd-logind[1286]: New session 19 of user core. Nov 1 00:44:03.152322 sshd[3572]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:03.156705 systemd-logind[1286]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:44:03.157262 systemd[1]: sshd@18-64.23.181.132:22-139.178.89.65:58664.service: Deactivated successfully. Nov 1 00:44:03.158455 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:44:03.160734 systemd-logind[1286]: Removed session 19. Nov 1 00:44:04.053646 kubelet[2047]: E1101 00:44:04.053597 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:08.158775 systemd[1]: Started sshd@19-64.23.181.132:22-139.178.89.65:51962.service. 
Nov 1 00:44:08.221808 sshd[3586]: Accepted publickey for core from 139.178.89.65 port 51962 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:08.224220 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:08.230219 systemd[1]: Started session-20.scope. Nov 1 00:44:08.231412 systemd-logind[1286]: New session 20 of user core. Nov 1 00:44:08.381510 sshd[3586]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:08.384749 systemd-logind[1286]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:44:08.384931 systemd[1]: sshd@19-64.23.181.132:22-139.178.89.65:51962.service: Deactivated successfully. Nov 1 00:44:08.385850 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:44:08.386638 systemd-logind[1286]: Removed session 20. Nov 1 00:44:10.053898 kubelet[2047]: E1101 00:44:10.053846 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:11.053154 kubelet[2047]: E1101 00:44:11.053113 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:13.384394 systemd[1]: Started sshd@20-64.23.181.132:22-139.178.89.65:51974.service. Nov 1 00:44:13.442992 sshd[3601]: Accepted publickey for core from 139.178.89.65 port 51974 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:13.445298 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:13.451126 systemd-logind[1286]: New session 21 of user core. Nov 1 00:44:13.452433 systemd[1]: Started session-21.scope. Nov 1 00:44:13.592322 sshd[3601]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:13.595752 systemd-logind[1286]: Session 21 logged out. 
Waiting for processes to exit. Nov 1 00:44:13.596323 systemd[1]: sshd@20-64.23.181.132:22-139.178.89.65:51974.service: Deactivated successfully. Nov 1 00:44:13.597780 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:44:13.598661 systemd-logind[1286]: Removed session 21. Nov 1 00:44:14.053117 kubelet[2047]: E1101 00:44:14.052672 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:18.596438 systemd[1]: Started sshd@21-64.23.181.132:22-139.178.89.65:55258.service. Nov 1 00:44:18.648566 sshd[3616]: Accepted publickey for core from 139.178.89.65 port 55258 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:18.650990 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:18.655961 systemd-logind[1286]: New session 22 of user core. Nov 1 00:44:18.656695 systemd[1]: Started session-22.scope. Nov 1 00:44:18.791227 sshd[3616]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:18.794769 systemd[1]: sshd@21-64.23.181.132:22-139.178.89.65:55258.service: Deactivated successfully. Nov 1 00:44:18.797556 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:44:18.798874 systemd-logind[1286]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:44:18.800886 systemd-logind[1286]: Removed session 22. Nov 1 00:44:23.794762 systemd[1]: Started sshd@22-64.23.181.132:22-139.178.89.65:55272.service. Nov 1 00:44:23.846243 sshd[3629]: Accepted publickey for core from 139.178.89.65 port 55272 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:23.848446 sshd[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:23.853161 systemd-logind[1286]: New session 23 of user core. Nov 1 00:44:23.853887 systemd[1]: Started session-23.scope. 
Nov 1 00:44:23.986317 sshd[3629]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:23.990060 systemd[1]: sshd@22-64.23.181.132:22-139.178.89.65:55272.service: Deactivated successfully. Nov 1 00:44:23.991145 systemd-logind[1286]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:44:23.991210 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:44:23.992217 systemd-logind[1286]: Removed session 23. Nov 1 00:44:28.992156 systemd[1]: Started sshd@23-64.23.181.132:22-139.178.89.65:42784.service. Nov 1 00:44:29.049053 sshd[3642]: Accepted publickey for core from 139.178.89.65 port 42784 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:29.051363 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:29.056837 systemd[1]: Started session-24.scope. Nov 1 00:44:29.058120 systemd-logind[1286]: New session 24 of user core. Nov 1 00:44:29.210199 sshd[3642]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:29.213895 systemd[1]: sshd@23-64.23.181.132:22-139.178.89.65:42784.service: Deactivated successfully. Nov 1 00:44:29.215846 systemd-logind[1286]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:44:29.215971 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:44:29.217286 systemd-logind[1286]: Removed session 24. Nov 1 00:44:33.053104 kubelet[2047]: E1101 00:44:33.053031 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:34.217155 systemd[1]: Started sshd@24-64.23.181.132:22-139.178.89.65:42796.service. 
Nov 1 00:44:34.281206 sshd[3655]: Accepted publickey for core from 139.178.89.65 port 42796 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:34.283427 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:34.289350 systemd[1]: Started session-25.scope. Nov 1 00:44:34.290446 systemd-logind[1286]: New session 25 of user core. Nov 1 00:44:34.426596 sshd[3655]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:34.431102 systemd[1]: Started sshd@25-64.23.181.132:22-139.178.89.65:42810.service. Nov 1 00:44:34.437953 systemd[1]: sshd@24-64.23.181.132:22-139.178.89.65:42796.service: Deactivated successfully. Nov 1 00:44:34.438885 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:44:34.440817 systemd-logind[1286]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:44:34.441953 systemd-logind[1286]: Removed session 25. Nov 1 00:44:34.489897 sshd[3666]: Accepted publickey for core from 139.178.89.65 port 42810 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:34.492251 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:34.498508 systemd[1]: Started session-26.scope. Nov 1 00:44:34.499146 systemd-logind[1286]: New session 26 of user core. Nov 1 00:44:35.942438 systemd[1]: run-containerd-runc-k8s.io-3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635-runc.JkewPn.mount: Deactivated successfully. 
Nov 1 00:44:35.962659 env[1294]: time="2025-11-01T00:44:35.962616182Z" level=info msg="StopContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" with timeout 30 (s)" Nov 1 00:44:35.963532 env[1294]: time="2025-11-01T00:44:35.963505013Z" level=info msg="Stop container \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" with signal terminated" Nov 1 00:44:35.985200 env[1294]: time="2025-11-01T00:44:35.985137262Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:44:35.988523 env[1294]: time="2025-11-01T00:44:35.988485923Z" level=info msg="StopContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" with timeout 2 (s)" Nov 1 00:44:35.989278 env[1294]: time="2025-11-01T00:44:35.989243870Z" level=info msg="Stop container \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" with signal terminated" Nov 1 00:44:36.001243 systemd-networkd[1060]: lxc_health: Link DOWN Nov 1 00:44:36.001252 systemd-networkd[1060]: lxc_health: Lost carrier Nov 1 00:44:36.053091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289-rootfs.mount: Deactivated successfully. 
Nov 1 00:44:36.058982 env[1294]: time="2025-11-01T00:44:36.058934523Z" level=info msg="shim disconnected" id=711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289 Nov 1 00:44:36.058982 env[1294]: time="2025-11-01T00:44:36.058982084Z" level=warning msg="cleaning up after shim disconnected" id=711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289 namespace=k8s.io Nov 1 00:44:36.059241 env[1294]: time="2025-11-01T00:44:36.058994997Z" level=info msg="cleaning up dead shim" Nov 1 00:44:36.070892 env[1294]: time="2025-11-01T00:44:36.070840155Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3727 runtime=io.containerd.runc.v2\n" Nov 1 00:44:36.072553 env[1294]: time="2025-11-01T00:44:36.072500406Z" level=info msg="StopContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" returns successfully" Nov 1 00:44:36.076055 env[1294]: time="2025-11-01T00:44:36.075996113Z" level=info msg="StopPodSandbox for \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\"" Nov 1 00:44:36.079769 env[1294]: time="2025-11-01T00:44:36.076139901Z" level=info msg="Container to stop \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.078790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724-shm.mount: Deactivated successfully. 
Nov 1 00:44:36.101874 env[1294]: time="2025-11-01T00:44:36.101824827Z" level=info msg="shim disconnected" id=3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635 Nov 1 00:44:36.102150 env[1294]: time="2025-11-01T00:44:36.102127975Z" level=warning msg="cleaning up after shim disconnected" id=3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635 namespace=k8s.io Nov 1 00:44:36.102241 env[1294]: time="2025-11-01T00:44:36.102224671Z" level=info msg="cleaning up dead shim" Nov 1 00:44:36.117942 env[1294]: time="2025-11-01T00:44:36.117876677Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3758 runtime=io.containerd.runc.v2\n" Nov 1 00:44:36.119988 env[1294]: time="2025-11-01T00:44:36.119942469Z" level=info msg="StopContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" returns successfully" Nov 1 00:44:36.120694 env[1294]: time="2025-11-01T00:44:36.120663855Z" level=info msg="StopPodSandbox for \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\"" Nov 1 00:44:36.120913 env[1294]: time="2025-11-01T00:44:36.120882713Z" level=info msg="Container to stop \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.121116 env[1294]: time="2025-11-01T00:44:36.121068100Z" level=info msg="Container to stop \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.121249 env[1294]: time="2025-11-01T00:44:36.121225267Z" level=info msg="Container to stop \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.121344 env[1294]: time="2025-11-01T00:44:36.121322663Z" level=info msg="Container to stop 
\"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.121456 env[1294]: time="2025-11-01T00:44:36.121431690Z" level=info msg="Container to stop \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:36.136838 env[1294]: time="2025-11-01T00:44:36.136763727Z" level=info msg="shim disconnected" id=6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724 Nov 1 00:44:36.136838 env[1294]: time="2025-11-01T00:44:36.136831341Z" level=warning msg="cleaning up after shim disconnected" id=6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724 namespace=k8s.io Nov 1 00:44:36.136838 env[1294]: time="2025-11-01T00:44:36.136847116Z" level=info msg="cleaning up dead shim" Nov 1 00:44:36.154049 env[1294]: time="2025-11-01T00:44:36.153978821Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3785 runtime=io.containerd.runc.v2\n" Nov 1 00:44:36.154564 env[1294]: time="2025-11-01T00:44:36.154530137Z" level=info msg="TearDown network for sandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" successfully" Nov 1 00:44:36.154693 env[1294]: time="2025-11-01T00:44:36.154674080Z" level=info msg="StopPodSandbox for \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" returns successfully" Nov 1 00:44:36.204814 env[1294]: time="2025-11-01T00:44:36.203999350Z" level=info msg="shim disconnected" id=f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6 Nov 1 00:44:36.205201 env[1294]: time="2025-11-01T00:44:36.205166424Z" level=warning msg="cleaning up after shim disconnected" id=f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6 namespace=k8s.io Nov 1 00:44:36.205321 env[1294]: time="2025-11-01T00:44:36.205303567Z" level=info msg="cleaning 
up dead shim" Nov 1 00:44:36.218825 env[1294]: time="2025-11-01T00:44:36.218770535Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3818 runtime=io.containerd.runc.v2\n" Nov 1 00:44:36.219543 env[1294]: time="2025-11-01T00:44:36.219492586Z" level=info msg="TearDown network for sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" successfully" Nov 1 00:44:36.219861 env[1294]: time="2025-11-01T00:44:36.219818319Z" level=info msg="StopPodSandbox for \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" returns successfully" Nov 1 00:44:36.317484 kubelet[2047]: I1101 00:44:36.317365 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg2rg\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318189 kubelet[2047]: I1101 00:44:36.318152 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cni-path\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318386 kubelet[2047]: I1101 00:44:36.318368 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a87caee-41be-4140-b973-d086be9585f5-cilium-config-path\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318476 kubelet[2047]: I1101 00:44:36.318463 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1567bd-bf97-4078-83dd-335d5ef0941c-cilium-config-path\") pod 
\"2b1567bd-bf97-4078-83dd-335d5ef0941c\" (UID: \"2b1567bd-bf97-4078-83dd-335d5ef0941c\") " Nov 1 00:44:36.318559 kubelet[2047]: I1101 00:44:36.318547 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a87caee-41be-4140-b973-d086be9585f5-clustermesh-secrets\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318630 kubelet[2047]: I1101 00:44:36.318618 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-hostproc\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318723 kubelet[2047]: I1101 00:44:36.318710 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318801 kubelet[2047]: I1101 00:44:36.318789 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-cgroup\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.318894 kubelet[2047]: I1101 00:44:36.318874 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-kernel\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319009 kubelet[2047]: I1101 00:44:36.318993 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-run\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319148 kubelet[2047]: I1101 00:44:36.319131 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxp9m\" (UniqueName: \"kubernetes.io/projected/2b1567bd-bf97-4078-83dd-335d5ef0941c-kube-api-access-dxp9m\") pod \"2b1567bd-bf97-4078-83dd-335d5ef0941c\" (UID: \"2b1567bd-bf97-4078-83dd-335d5ef0941c\") " Nov 1 00:44:36.319266 kubelet[2047]: I1101 00:44:36.319248 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-lib-modules\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319378 kubelet[2047]: I1101 00:44:36.319361 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-xtables-lock\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319463 kubelet[2047]: I1101 00:44:36.319451 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-net\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319534 kubelet[2047]: I1101 00:44:36.319522 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-bpf-maps\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.319644 kubelet[2047]: I1101 00:44:36.319597 2047 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-etc-cni-netd\") pod \"6a87caee-41be-4140-b973-d086be9585f5\" (UID: \"6a87caee-41be-4140-b973-d086be9585f5\") " Nov 1 00:44:36.327680 kubelet[2047]: I1101 00:44:36.327607 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg" (OuterVolumeSpecName: "kube-api-access-qg2rg") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "kube-api-access-qg2rg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:36.327854 kubelet[2047]: I1101 00:44:36.327734 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.327854 kubelet[2047]: I1101 00:44:36.327768 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.328671 kubelet[2047]: I1101 00:44:36.328637 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.328806 kubelet[2047]: I1101 00:44:36.328791 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.330874 kubelet[2047]: I1101 00:44:36.330822 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a87caee-41be-4140-b973-d086be9585f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:36.332437 kubelet[2047]: I1101 00:44:36.332410 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1567bd-bf97-4078-83dd-335d5ef0941c-kube-api-access-dxp9m" (OuterVolumeSpecName: "kube-api-access-dxp9m") pod "2b1567bd-bf97-4078-83dd-335d5ef0941c" (UID: "2b1567bd-bf97-4078-83dd-335d5ef0941c"). InnerVolumeSpecName "kube-api-access-dxp9m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:36.332605 kubelet[2047]: I1101 00:44:36.332589 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.332721 kubelet[2047]: I1101 00:44:36.332707 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.332800 kubelet[2047]: I1101 00:44:36.332788 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.332971 kubelet[2047]: I1101 00:44:36.332955 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.333039 kubelet[2047]: I1101 00:44:36.322475 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.333133 kubelet[2047]: I1101 00:44:36.333120 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:36.334353 kubelet[2047]: I1101 00:44:36.334316 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b1567bd-bf97-4078-83dd-335d5ef0941c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b1567bd-bf97-4078-83dd-335d5ef0941c" (UID: "2b1567bd-bf97-4078-83dd-335d5ef0941c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:36.336584 kubelet[2047]: I1101 00:44:36.336557 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a87caee-41be-4140-b973-d086be9585f5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:36.338493 kubelet[2047]: I1101 00:44:36.338453 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a87caee-41be-4140-b973-d086be9585f5" (UID: "6a87caee-41be-4140-b973-d086be9585f5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:36.403829 kubelet[2047]: I1101 00:44:36.403790 2047 scope.go:117] "RemoveContainer" containerID="711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289" Nov 1 00:44:36.408827 env[1294]: time="2025-11-01T00:44:36.408360772Z" level=info msg="RemoveContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\"" Nov 1 00:44:36.413275 env[1294]: time="2025-11-01T00:44:36.413224846Z" level=info msg="RemoveContainer for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" returns successfully" Nov 1 00:44:36.424296 kubelet[2047]: I1101 00:44:36.424256 2047 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-xtables-lock\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424476 kubelet[2047]: I1101 00:44:36.424460 2047 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-net\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424566 kubelet[2047]: I1101 00:44:36.424553 2047 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-bpf-maps\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424630 kubelet[2047]: I1101 00:44:36.424619 2047 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-etc-cni-netd\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424692 kubelet[2047]: I1101 00:44:36.424681 2047 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cni-path\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath 
\"\"" Nov 1 00:44:36.424775 kubelet[2047]: I1101 00:44:36.424763 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a87caee-41be-4140-b973-d086be9585f5-cilium-config-path\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424855 kubelet[2047]: I1101 00:44:36.424837 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b1567bd-bf97-4078-83dd-335d5ef0941c-cilium-config-path\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.424946 kubelet[2047]: I1101 00:44:36.424929 2047 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qg2rg\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-kube-api-access-qg2rg\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425027 kubelet[2047]: I1101 00:44:36.425015 2047 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-hostproc\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425117 kubelet[2047]: I1101 00:44:36.425105 2047 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a87caee-41be-4140-b973-d086be9585f5-clustermesh-secrets\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425188 kubelet[2047]: I1101 00:44:36.425176 2047 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a87caee-41be-4140-b973-d086be9585f5-hubble-tls\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425256 kubelet[2047]: I1101 00:44:36.425246 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-cgroup\") on node \"ci-3510.3.8-n-14edb40b39\" 
DevicePath \"\"" Nov 1 00:44:36.425318 kubelet[2047]: I1101 00:44:36.425307 2047 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425717 kubelet[2047]: I1101 00:44:36.425401 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-cilium-run\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425717 kubelet[2047]: I1101 00:44:36.425414 2047 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxp9m\" (UniqueName: \"kubernetes.io/projected/2b1567bd-bf97-4078-83dd-335d5ef0941c-kube-api-access-dxp9m\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.425717 kubelet[2047]: I1101 00:44:36.425476 2047 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a87caee-41be-4140-b973-d086be9585f5-lib-modules\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:36.428051 kubelet[2047]: I1101 00:44:36.428023 2047 scope.go:117] "RemoveContainer" containerID="711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289" Nov 1 00:44:36.428652 env[1294]: time="2025-11-01T00:44:36.428516210Z" level=error msg="ContainerStatus for \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\": not found" Nov 1 00:44:36.429799 kubelet[2047]: E1101 00:44:36.429778 2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\": not found" 
containerID="711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289" Nov 1 00:44:36.430102 kubelet[2047]: I1101 00:44:36.429948 2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289"} err="failed to get container status \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\": rpc error: code = NotFound desc = an error occurred when try to find container \"711f020c5a2255eaeaaa00e8a3997fc7d5cb36912dece2b556d96e360e077289\": not found" Nov 1 00:44:36.430226 kubelet[2047]: I1101 00:44:36.430211 2047 scope.go:117] "RemoveContainer" containerID="3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635" Nov 1 00:44:36.431782 env[1294]: time="2025-11-01T00:44:36.431740656Z" level=info msg="RemoveContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\"" Nov 1 00:44:36.438830 env[1294]: time="2025-11-01T00:44:36.438709941Z" level=info msg="RemoveContainer for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" returns successfully" Nov 1 00:44:36.439348 kubelet[2047]: I1101 00:44:36.439317 2047 scope.go:117] "RemoveContainer" containerID="90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754" Nov 1 00:44:36.442728 env[1294]: time="2025-11-01T00:44:36.442676274Z" level=info msg="RemoveContainer for \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\"" Nov 1 00:44:36.445909 env[1294]: time="2025-11-01T00:44:36.445863629Z" level=info msg="RemoveContainer for \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\" returns successfully" Nov 1 00:44:36.446161 kubelet[2047]: I1101 00:44:36.446142 2047 scope.go:117] "RemoveContainer" containerID="80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d" Nov 1 00:44:36.447474 env[1294]: time="2025-11-01T00:44:36.447437081Z" level=info msg="RemoveContainer for 
\"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\"" Nov 1 00:44:36.451449 env[1294]: time="2025-11-01T00:44:36.450960321Z" level=info msg="RemoveContainer for \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\" returns successfully" Nov 1 00:44:36.451789 kubelet[2047]: I1101 00:44:36.451722 2047 scope.go:117] "RemoveContainer" containerID="e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5" Nov 1 00:44:36.453735 env[1294]: time="2025-11-01T00:44:36.453408954Z" level=info msg="RemoveContainer for \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\"" Nov 1 00:44:36.460608 env[1294]: time="2025-11-01T00:44:36.460237001Z" level=info msg="RemoveContainer for \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\" returns successfully" Nov 1 00:44:36.462960 kubelet[2047]: I1101 00:44:36.462919 2047 scope.go:117] "RemoveContainer" containerID="53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a" Nov 1 00:44:36.470017 env[1294]: time="2025-11-01T00:44:36.469962794Z" level=info msg="RemoveContainer for \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\"" Nov 1 00:44:36.472814 env[1294]: time="2025-11-01T00:44:36.472765014Z" level=info msg="RemoveContainer for \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\" returns successfully" Nov 1 00:44:36.478142 kubelet[2047]: I1101 00:44:36.478091 2047 scope.go:117] "RemoveContainer" containerID="3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635" Nov 1 00:44:36.479107 env[1294]: time="2025-11-01T00:44:36.478915168Z" level=error msg="ContainerStatus for \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\": not found" Nov 1 00:44:36.479246 kubelet[2047]: E1101 00:44:36.479192 2047 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\": not found" containerID="3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635" Nov 1 00:44:36.479310 kubelet[2047]: I1101 00:44:36.479236 2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635"} err="failed to get container status \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\": rpc error: code = NotFound desc = an error occurred when try to find container \"3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635\": not found" Nov 1 00:44:36.479310 kubelet[2047]: I1101 00:44:36.479269 2047 scope.go:117] "RemoveContainer" containerID="90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754" Nov 1 00:44:36.479646 env[1294]: time="2025-11-01T00:44:36.479560372Z" level=error msg="ContainerStatus for \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\": not found" Nov 1 00:44:36.479833 kubelet[2047]: E1101 00:44:36.479805 2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\": not found" containerID="90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754" Nov 1 00:44:36.479911 kubelet[2047]: I1101 00:44:36.479840 2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754"} err="failed to get container status \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"90dae68b3f39557df24e8a839c41df4f05b640884eedd04f9f2ab7111f141754\": not found" Nov 1 00:44:36.479911 kubelet[2047]: I1101 00:44:36.479867 2047 scope.go:117] "RemoveContainer" containerID="80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d" Nov 1 00:44:36.480300 env[1294]: time="2025-11-01T00:44:36.480186299Z" level=error msg="ContainerStatus for \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\": not found" Nov 1 00:44:36.480417 kubelet[2047]: E1101 00:44:36.480354 2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\": not found" containerID="80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d" Nov 1 00:44:36.480417 kubelet[2047]: I1101 00:44:36.480383 2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d"} err="failed to get container status \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\": rpc error: code = NotFound desc = an error occurred when try to find container \"80d386639f852dcd904482496ddb7454148a2b2a601af8044125efacbf61f83d\": not found" Nov 1 00:44:36.480417 kubelet[2047]: I1101 00:44:36.480406 2047 scope.go:117] "RemoveContainer" containerID="e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5" Nov 1 00:44:36.480718 env[1294]: time="2025-11-01T00:44:36.480653082Z" level=error msg="ContainerStatus for \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\": not found" Nov 1 00:44:36.480873 kubelet[2047]: E1101 00:44:36.480845 2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\": not found" containerID="e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5" Nov 1 00:44:36.480957 kubelet[2047]: I1101 00:44:36.480877 2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5"} err="failed to get container status \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e72e464dc19ef10644f9ebe10f56f5307d5baf1772fa2fc6b502458faf1228c5\": not found" Nov 1 00:44:36.480957 kubelet[2047]: I1101 00:44:36.480898 2047 scope.go:117] "RemoveContainer" containerID="53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a" Nov 1 00:44:36.481358 env[1294]: time="2025-11-01T00:44:36.481252208Z" level=error msg="ContainerStatus for \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\": not found" Nov 1 00:44:36.481669 kubelet[2047]: E1101 00:44:36.481636 2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\": not found" containerID="53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a" Nov 1 00:44:36.481806 kubelet[2047]: I1101 00:44:36.481778 2047 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a"} err="failed to get container status \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\": rpc error: code = NotFound desc = an error occurred when try to find container \"53460e780ff152eec4260fb5fd6dc00ab32ec805c737f5d2520236bb5abf881a\": not found" Nov 1 00:44:36.935505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3776aed5335b0832730ff8acc570370ad7cc9c21a65e70215d509e883d5ec635-rootfs.mount: Deactivated successfully. Nov 1 00:44:36.936175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6-rootfs.mount: Deactivated successfully. Nov 1 00:44:36.936577 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6-shm.mount: Deactivated successfully. Nov 1 00:44:36.936856 systemd[1]: var-lib-kubelet-pods-6a87caee\x2d41be\x2d4140\x2db973\x2dd086be9585f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqg2rg.mount: Deactivated successfully. Nov 1 00:44:36.937276 systemd[1]: var-lib-kubelet-pods-6a87caee\x2d41be\x2d4140\x2db973\x2dd086be9585f5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:44:36.937597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724-rootfs.mount: Deactivated successfully. Nov 1 00:44:36.937929 systemd[1]: var-lib-kubelet-pods-2b1567bd\x2dbf97\x2d4078\x2d83dd\x2d335d5ef0941c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxp9m.mount: Deactivated successfully. Nov 1 00:44:36.938272 systemd[1]: var-lib-kubelet-pods-6a87caee\x2d41be\x2d4140\x2db973\x2dd086be9585f5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:44:37.870466 sshd[3666]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:37.876185 systemd[1]: Started sshd@26-64.23.181.132:22-139.178.89.65:58166.service. Nov 1 00:44:37.877091 systemd[1]: sshd@25-64.23.181.132:22-139.178.89.65:42810.service: Deactivated successfully. Nov 1 00:44:37.879846 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:44:37.880521 systemd-logind[1286]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:44:37.882473 systemd-logind[1286]: Removed session 26. Nov 1 00:44:37.937168 sshd[3834]: Accepted publickey for core from 139.178.89.65 port 58166 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:37.939010 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:37.944768 systemd-logind[1286]: New session 27 of user core. Nov 1 00:44:37.945294 systemd[1]: Started session-27.scope. Nov 1 00:44:38.010708 env[1294]: time="2025-11-01T00:44:38.010517591Z" level=info msg="StopPodSandbox for \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\"" Nov 1 00:44:38.010708 env[1294]: time="2025-11-01T00:44:38.010612959Z" level=info msg="TearDown network for sandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" successfully" Nov 1 00:44:38.010708 env[1294]: time="2025-11-01T00:44:38.010646576Z" level=info msg="StopPodSandbox for \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" returns successfully" Nov 1 00:44:38.011461 env[1294]: time="2025-11-01T00:44:38.011326513Z" level=info msg="RemovePodSandbox for \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\"" Nov 1 00:44:38.011461 env[1294]: time="2025-11-01T00:44:38.011357930Z" level=info msg="Forcibly stopping sandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\"" Nov 1 00:44:38.011571 env[1294]: time="2025-11-01T00:44:38.011468202Z" level=info msg="TearDown network for sandbox 
\"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" successfully" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.013884611Z" level=info msg="RemovePodSandbox \"6c95aff4b2a103b194dae4ed0c203515705477288899bc78cd334432ab349724\" returns successfully" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.014486041Z" level=info msg="StopPodSandbox for \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\"" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.014611484Z" level=info msg="TearDown network for sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" successfully" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.014648543Z" level=info msg="StopPodSandbox for \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" returns successfully" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.014999485Z" level=info msg="RemovePodSandbox for \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\"" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.015020801Z" level=info msg="Forcibly stopping sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\"" Nov 1 00:44:38.015777 env[1294]: time="2025-11-01T00:44:38.015109108Z" level=info msg="TearDown network for sandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" successfully" Nov 1 00:44:38.018180 env[1294]: time="2025-11-01T00:44:38.018044956Z" level=info msg="RemovePodSandbox \"f2270fbb651aeaddcc2cd59cf577a09eeab5ed71cc14135afb54f3c97eb203e6\" returns successfully" Nov 1 00:44:38.055700 kubelet[2047]: I1101 00:44:38.055656 2047 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1567bd-bf97-4078-83dd-335d5ef0941c" path="/var/lib/kubelet/pods/2b1567bd-bf97-4078-83dd-335d5ef0941c/volumes" Nov 1 00:44:38.056350 kubelet[2047]: I1101 00:44:38.056307 2047 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6a87caee-41be-4140-b973-d086be9585f5" path="/var/lib/kubelet/pods/6a87caee-41be-4140-b973-d086be9585f5/volumes" Nov 1 00:44:38.163482 kubelet[2047]: E1101 00:44:38.163366 2047 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:44:38.699647 sshd[3834]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:38.704117 systemd[1]: sshd@26-64.23.181.132:22-139.178.89.65:58166.service: Deactivated successfully. Nov 1 00:44:38.708204 systemd[1]: Started sshd@27-64.23.181.132:22-139.178.89.65:58174.service. Nov 1 00:44:38.711588 systemd-logind[1286]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:44:38.711864 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:44:38.730393 systemd-logind[1286]: Removed session 27. Nov 1 00:44:38.756165 kubelet[2047]: I1101 00:44:38.756118 2047 memory_manager.go:355] "RemoveStaleState removing state" podUID="6a87caee-41be-4140-b973-d086be9585f5" containerName="cilium-agent" Nov 1 00:44:38.756165 kubelet[2047]: I1101 00:44:38.756146 2047 memory_manager.go:355] "RemoveStaleState removing state" podUID="2b1567bd-bf97-4078-83dd-335d5ef0941c" containerName="cilium-operator" Nov 1 00:44:38.785101 sshd[3849]: Accepted publickey for core from 139.178.89.65 port 58174 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:38.785719 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:38.792589 systemd[1]: Started session-28.scope. Nov 1 00:44:38.800408 systemd-logind[1286]: New session 28 of user core. 
Nov 1 00:44:38.844855 kubelet[2047]: I1101 00:44:38.844777 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-etc-cni-netd\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.844855 kubelet[2047]: I1101 00:44:38.844848 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-bpf-maps\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.844886 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-hostproc\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.844928 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-cgroup\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.844950 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-xtables-lock\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.844977 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-run\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.845014 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-ipsec-secrets\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845121 kubelet[2047]: I1101 00:44:38.845035 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-net\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845386 kubelet[2047]: I1101 00:44:38.845068 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-config-path\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845386 kubelet[2047]: I1101 00:44:38.845118 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-kernel\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845386 kubelet[2047]: I1101 00:44:38.845156 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cni-path\") 
pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845386 kubelet[2047]: I1101 00:44:38.845176 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-clustermesh-secrets\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845386 kubelet[2047]: I1101 00:44:38.845204 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgd5\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-kube-api-access-stgd5\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845561 kubelet[2047]: I1101 00:44:38.845237 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-lib-modules\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:38.845561 kubelet[2047]: I1101 00:44:38.845259 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-hubble-tls\") pod \"cilium-bh5vc\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " pod="kube-system/cilium-bh5vc" Nov 1 00:44:39.032583 sshd[3849]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:39.040730 systemd[1]: Started sshd@28-64.23.181.132:22-139.178.89.65:58188.service. Nov 1 00:44:39.047329 systemd-logind[1286]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:44:39.048726 systemd[1]: sshd@27-64.23.181.132:22-139.178.89.65:58174.service: Deactivated successfully. 
Nov 1 00:44:39.049792 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:44:39.052116 systemd-logind[1286]: Removed session 28. Nov 1 00:44:39.070098 kubelet[2047]: E1101 00:44:39.063753 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:39.070554 env[1294]: time="2025-11-01T00:44:39.066864521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bh5vc,Uid:b4b44c11-a283-4096-bacb-9eb2076ccee6,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:39.107360 env[1294]: time="2025-11-01T00:44:39.104364108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:39.107360 env[1294]: time="2025-11-01T00:44:39.104405798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:39.107360 env[1294]: time="2025-11-01T00:44:39.104418891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:39.107360 env[1294]: time="2025-11-01T00:44:39.104579704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c pid=3874 runtime=io.containerd.runc.v2 Nov 1 00:44:39.161840 sshd[3864]: Accepted publickey for core from 139.178.89.65 port 58188 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:44:39.163751 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:39.174789 systemd[1]: Started session-29.scope. Nov 1 00:44:39.175812 systemd-logind[1286]: New session 29 of user core. 
Nov 1 00:44:39.198471 env[1294]: time="2025-11-01T00:44:39.198415014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bh5vc,Uid:b4b44c11-a283-4096-bacb-9eb2076ccee6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\"" Nov 1 00:44:39.199749 kubelet[2047]: E1101 00:44:39.199492 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:39.203756 env[1294]: time="2025-11-01T00:44:39.203713021Z" level=info msg="CreateContainer within sandbox \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:44:39.215011 env[1294]: time="2025-11-01T00:44:39.214946810Z" level=info msg="CreateContainer within sandbox \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\"" Nov 1 00:44:39.217143 env[1294]: time="2025-11-01T00:44:39.216026584Z" level=info msg="StartContainer for \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\"" Nov 1 00:44:39.285167 env[1294]: time="2025-11-01T00:44:39.284060319Z" level=info msg="StartContainer for \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\" returns successfully" Nov 1 00:44:39.333639 env[1294]: time="2025-11-01T00:44:39.333587420Z" level=info msg="shim disconnected" id=4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7 Nov 1 00:44:39.340758 env[1294]: time="2025-11-01T00:44:39.335143343Z" level=warning msg="cleaning up after shim disconnected" id=4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7 namespace=k8s.io Nov 1 00:44:39.340758 env[1294]: time="2025-11-01T00:44:39.335177473Z" level=info msg="cleaning up dead 
shim" Nov 1 00:44:39.348825 env[1294]: time="2025-11-01T00:44:39.348734358Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:44:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Nov 1 00:44:39.421362 env[1294]: time="2025-11-01T00:44:39.421311152Z" level=info msg="StopPodSandbox for \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\"" Nov 1 00:44:39.422005 env[1294]: time="2025-11-01T00:44:39.421806671Z" level=info msg="Container to stop \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:44:39.504482 env[1294]: time="2025-11-01T00:44:39.504421470Z" level=info msg="shim disconnected" id=f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c Nov 1 00:44:39.505023 env[1294]: time="2025-11-01T00:44:39.504987102Z" level=warning msg="cleaning up after shim disconnected" id=f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c namespace=k8s.io Nov 1 00:44:39.505546 env[1294]: time="2025-11-01T00:44:39.505518930Z" level=info msg="cleaning up dead shim" Nov 1 00:44:39.529445 env[1294]: time="2025-11-01T00:44:39.529388898Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" Nov 1 00:44:39.530041 env[1294]: time="2025-11-01T00:44:39.530011293Z" level=info msg="TearDown network for sandbox \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\" successfully" Nov 1 00:44:39.530188 env[1294]: time="2025-11-01T00:44:39.530168448Z" level=info msg="StopPodSandbox for \"f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c\" returns successfully" Nov 1 00:44:39.671190 
kubelet[2047]: I1101 00:44:39.670675 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-hubble-tls\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671559 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-ipsec-secrets\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671593 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-bpf-maps\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671612 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-cgroup\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671632 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-run\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671646 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cni-path\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" 
(UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.671993 kubelet[2047]: I1101 00:44:39.671660 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-hostproc\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671677 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-etc-cni-netd\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671696 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-net\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671714 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stgd5\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-kube-api-access-stgd5\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671730 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-clustermesh-secrets\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671746 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-xtables-lock\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672404 kubelet[2047]: I1101 00:44:39.671765 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-config-path\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672703 kubelet[2047]: I1101 00:44:39.671780 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-kernel\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672703 kubelet[2047]: I1101 00:44:39.671797 2047 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-lib-modules\") pod \"b4b44c11-a283-4096-bacb-9eb2076ccee6\" (UID: \"b4b44c11-a283-4096-bacb-9eb2076ccee6\") " Nov 1 00:44:39.672703 kubelet[2047]: I1101 00:44:39.671870 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673094 kubelet[2047]: I1101 00:44:39.672911 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673094 kubelet[2047]: I1101 00:44:39.672970 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673094 kubelet[2047]: I1101 00:44:39.673001 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673094 kubelet[2047]: I1101 00:44:39.673024 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673094 kubelet[2047]: I1101 00:44:39.673046 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.673659 kubelet[2047]: I1101 00:44:39.673068 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.674305 kubelet[2047]: I1101 00:44:39.674271 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.676903 kubelet[2047]: I1101 00:44:39.676856 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:39.677054 kubelet[2047]: I1101 00:44:39.676916 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.677054 kubelet[2047]: I1101 00:44:39.676937 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:44:39.678592 kubelet[2047]: I1101 00:44:39.678551 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:39.680304 kubelet[2047]: I1101 00:44:39.680267 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:39.682412 kubelet[2047]: I1101 00:44:39.682374 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:39.686318 kubelet[2047]: I1101 00:44:39.686262 2047 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-kube-api-access-stgd5" (OuterVolumeSpecName: "kube-api-access-stgd5") pod "b4b44c11-a283-4096-bacb-9eb2076ccee6" (UID: "b4b44c11-a283-4096-bacb-9eb2076ccee6"). InnerVolumeSpecName "kube-api-access-stgd5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:39.772818 kubelet[2047]: I1101 00:44:39.772766 2047 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-lib-modules\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773122 kubelet[2047]: I1101 00:44:39.773067 2047 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-hubble-tls\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773225 kubelet[2047]: I1101 00:44:39.773212 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773323 kubelet[2047]: I1101 00:44:39.773308 2047 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-bpf-maps\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773412 kubelet[2047]: I1101 00:44:39.773400 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-cgroup\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773571 kubelet[2047]: I1101 00:44:39.773486 2047 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cni-path\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773679 kubelet[2047]: I1101 00:44:39.773668 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-run\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773772 kubelet[2047]: I1101 00:44:39.773758 2047 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-hostproc\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773857 kubelet[2047]: I1101 00:44:39.773845 2047 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-etc-cni-netd\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.773970 kubelet[2047]: I1101 00:44:39.773954 2047 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-stgd5\" (UniqueName: \"kubernetes.io/projected/b4b44c11-a283-4096-bacb-9eb2076ccee6-kube-api-access-stgd5\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.774100 kubelet[2047]: I1101 00:44:39.774083 2047 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b4b44c11-a283-4096-bacb-9eb2076ccee6-clustermesh-secrets\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.774213 kubelet[2047]: I1101 00:44:39.774196 2047 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-net\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.774343 kubelet[2047]: I1101 00:44:39.774326 2047 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-xtables-lock\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.774439 kubelet[2047]: I1101 00:44:39.774425 2047 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4b44c11-a283-4096-bacb-9eb2076ccee6-cilium-config-path\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.774532 kubelet[2047]: I1101 00:44:39.774520 2047 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4b44c11-a283-4096-bacb-9eb2076ccee6-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-14edb40b39\" DevicePath \"\"" Nov 1 00:44:39.952732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f620903594ae5ac687ebffe9a0e4b4d058d234410205fe4bae9d4e5df4b4ca5c-shm.mount: Deactivated successfully. Nov 1 00:44:39.952928 systemd[1]: var-lib-kubelet-pods-b4b44c11\x2da283\x2d4096\x2dbacb\x2d9eb2076ccee6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstgd5.mount: Deactivated successfully. Nov 1 00:44:39.953030 systemd[1]: var-lib-kubelet-pods-b4b44c11\x2da283\x2d4096\x2dbacb\x2d9eb2076ccee6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 00:44:39.953151 systemd[1]: var-lib-kubelet-pods-b4b44c11\x2da283\x2d4096\x2dbacb\x2d9eb2076ccee6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:39.953259 systemd[1]: var-lib-kubelet-pods-b4b44c11\x2da283\x2d4096\x2dbacb\x2d9eb2076ccee6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:44:40.423794 kubelet[2047]: I1101 00:44:40.423762 2047 scope.go:117] "RemoveContainer" containerID="4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7" Nov 1 00:44:40.428581 env[1294]: time="2025-11-01T00:44:40.428242067Z" level=info msg="RemoveContainer for \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\"" Nov 1 00:44:40.431229 env[1294]: time="2025-11-01T00:44:40.431023429Z" level=info msg="RemoveContainer for \"4e4d447fdad7c7ee67d14362a3a0ffa4ae22e7ffce441902fefe4f76313f6bd7\" returns successfully" Nov 1 00:44:40.480194 kubelet[2047]: I1101 00:44:40.480141 2047 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4b44c11-a283-4096-bacb-9eb2076ccee6" containerName="mount-cgroup" Nov 1 00:44:40.581208 kubelet[2047]: I1101 00:44:40.581145 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-cilium-run\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.581630 kubelet[2047]: I1101 00:44:40.581500 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-lib-modules\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.581822 kubelet[2047]: I1101 00:44:40.581804 2047 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e4eb238-414f-4e8e-a593-8301e407dddf-clustermesh-secrets\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.581934 kubelet[2047]: I1101 00:44:40.581920 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvht8\" (UniqueName: \"kubernetes.io/projected/7e4eb238-414f-4e8e-a593-8301e407dddf-kube-api-access-gvht8\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582040 kubelet[2047]: I1101 00:44:40.582026 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-bpf-maps\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582431 kubelet[2047]: I1101 00:44:40.582166 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-host-proc-sys-kernel\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582431 kubelet[2047]: I1101 00:44:40.582224 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e4eb238-414f-4e8e-a593-8301e407dddf-cilium-ipsec-secrets\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582431 kubelet[2047]: I1101 00:44:40.582251 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/7e4eb238-414f-4e8e-a593-8301e407dddf-hubble-tls\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582431 kubelet[2047]: I1101 00:44:40.582275 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e4eb238-414f-4e8e-a593-8301e407dddf-cilium-config-path\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582575 kubelet[2047]: I1101 00:44:40.582466 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-host-proc-sys-net\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582575 kubelet[2047]: I1101 00:44:40.582509 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-hostproc\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582575 kubelet[2047]: I1101 00:44:40.582532 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-cilium-cgroup\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582575 kubelet[2047]: I1101 00:44:40.582551 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-cni-path\") pod \"cilium-6xgrk\" (UID: 
\"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582575 kubelet[2047]: I1101 00:44:40.582567 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-etc-cni-netd\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.582749 kubelet[2047]: I1101 00:44:40.582582 2047 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e4eb238-414f-4e8e-a593-8301e407dddf-xtables-lock\") pod \"cilium-6xgrk\" (UID: \"7e4eb238-414f-4e8e-a593-8301e407dddf\") " pod="kube-system/cilium-6xgrk" Nov 1 00:44:40.776100 kubelet[2047]: I1101 00:44:40.775943 2047 setters.go:602] "Node became not ready" node="ci-3510.3.8-n-14edb40b39" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:44:40Z","lastTransitionTime":"2025-11-01T00:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:44:40.783806 kubelet[2047]: E1101 00:44:40.783747 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:40.784764 env[1294]: time="2025-11-01T00:44:40.784702859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xgrk,Uid:7e4eb238-414f-4e8e-a593-8301e407dddf,Namespace:kube-system,Attempt:0,}" Nov 1 00:44:40.803209 env[1294]: time="2025-11-01T00:44:40.803126130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:40.803419 env[1294]: time="2025-11-01T00:44:40.803254637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:40.803419 env[1294]: time="2025-11-01T00:44:40.803299723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:40.803527 env[1294]: time="2025-11-01T00:44:40.803456928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58 pid=4033 runtime=io.containerd.runc.v2 Nov 1 00:44:40.851773 env[1294]: time="2025-11-01T00:44:40.851731413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xgrk,Uid:7e4eb238-414f-4e8e-a593-8301e407dddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\"" Nov 1 00:44:40.854279 kubelet[2047]: E1101 00:44:40.852864 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:40.860597 env[1294]: time="2025-11-01T00:44:40.860547955Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:44:40.868716 env[1294]: time="2025-11-01T00:44:40.868667865Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45\"" Nov 1 00:44:40.871378 env[1294]: time="2025-11-01T00:44:40.870411462Z" level=info msg="StartContainer for 
\"0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45\"" Nov 1 00:44:40.935666 env[1294]: time="2025-11-01T00:44:40.934021371Z" level=info msg="StartContainer for \"0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45\" returns successfully" Nov 1 00:44:40.972446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45-rootfs.mount: Deactivated successfully. Nov 1 00:44:40.981615 env[1294]: time="2025-11-01T00:44:40.981513269Z" level=info msg="shim disconnected" id=0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45 Nov 1 00:44:40.981897 env[1294]: time="2025-11-01T00:44:40.981873433Z" level=warning msg="cleaning up after shim disconnected" id=0c79d6d7656f9690290f5f49644d179ff573856e2111730ae880e7dde5faae45 namespace=k8s.io Nov 1 00:44:40.981997 env[1294]: time="2025-11-01T00:44:40.981982848Z" level=info msg="cleaning up dead shim" Nov 1 00:44:40.991897 env[1294]: time="2025-11-01T00:44:40.991840907Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4116 runtime=io.containerd.runc.v2\n" Nov 1 00:44:41.428632 kubelet[2047]: E1101 00:44:41.428585 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:41.431734 env[1294]: time="2025-11-01T00:44:41.431688531Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:44:41.444392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336485001.mount: Deactivated successfully. 
Nov 1 00:44:41.465285 env[1294]: time="2025-11-01T00:44:41.465235739Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4caf4172c5365e0460710c974c30b72606b4583ca9d559e7943f0772e3c5c622\"" Nov 1 00:44:41.472452 env[1294]: time="2025-11-01T00:44:41.472391810Z" level=info msg="StartContainer for \"4caf4172c5365e0460710c974c30b72606b4583ca9d559e7943f0772e3c5c622\"" Nov 1 00:44:41.538873 env[1294]: time="2025-11-01T00:44:41.538819272Z" level=info msg="StartContainer for \"4caf4172c5365e0460710c974c30b72606b4583ca9d559e7943f0772e3c5c622\" returns successfully" Nov 1 00:44:41.562534 env[1294]: time="2025-11-01T00:44:41.562482688Z" level=info msg="shim disconnected" id=4caf4172c5365e0460710c974c30b72606b4583ca9d559e7943f0772e3c5c622 Nov 1 00:44:41.562952 env[1294]: time="2025-11-01T00:44:41.562926758Z" level=warning msg="cleaning up after shim disconnected" id=4caf4172c5365e0460710c974c30b72606b4583ca9d559e7943f0772e3c5c622 namespace=k8s.io Nov 1 00:44:41.563107 env[1294]: time="2025-11-01T00:44:41.563080520Z" level=info msg="cleaning up dead shim" Nov 1 00:44:41.573824 env[1294]: time="2025-11-01T00:44:41.573769790Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4176 runtime=io.containerd.runc.v2\n" Nov 1 00:44:41.952980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004410116.mount: Deactivated successfully. 
Nov 1 00:44:42.055335 kubelet[2047]: I1101 00:44:42.054875 2047 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b44c11-a283-4096-bacb-9eb2076ccee6" path="/var/lib/kubelet/pods/b4b44c11-a283-4096-bacb-9eb2076ccee6/volumes" Nov 1 00:44:42.433195 kubelet[2047]: E1101 00:44:42.433157 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:42.438065 env[1294]: time="2025-11-01T00:44:42.438024826Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:44:42.459409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759089222.mount: Deactivated successfully. Nov 1 00:44:42.471887 env[1294]: time="2025-11-01T00:44:42.471818783Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60dc04916ccc9307cea475f9e291bee4400fc9191a70f93cb989a1cdc16565b2\"" Nov 1 00:44:42.472433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220905041.mount: Deactivated successfully. 
Nov 1 00:44:42.475050 env[1294]: time="2025-11-01T00:44:42.475015643Z" level=info msg="StartContainer for \"60dc04916ccc9307cea475f9e291bee4400fc9191a70f93cb989a1cdc16565b2\"" Nov 1 00:44:42.541880 env[1294]: time="2025-11-01T00:44:42.541824587Z" level=info msg="StartContainer for \"60dc04916ccc9307cea475f9e291bee4400fc9191a70f93cb989a1cdc16565b2\" returns successfully" Nov 1 00:44:42.577006 env[1294]: time="2025-11-01T00:44:42.576960061Z" level=info msg="shim disconnected" id=60dc04916ccc9307cea475f9e291bee4400fc9191a70f93cb989a1cdc16565b2 Nov 1 00:44:42.577373 env[1294]: time="2025-11-01T00:44:42.577349524Z" level=warning msg="cleaning up after shim disconnected" id=60dc04916ccc9307cea475f9e291bee4400fc9191a70f93cb989a1cdc16565b2 namespace=k8s.io Nov 1 00:44:42.577473 env[1294]: time="2025-11-01T00:44:42.577458017Z" level=info msg="cleaning up dead shim" Nov 1 00:44:42.586819 env[1294]: time="2025-11-01T00:44:42.586774090Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4233 runtime=io.containerd.runc.v2\n" Nov 1 00:44:43.165523 kubelet[2047]: E1101 00:44:43.165473 2047 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:44:43.436997 kubelet[2047]: E1101 00:44:43.436897 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:43.440882 env[1294]: time="2025-11-01T00:44:43.440841994Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:44:43.457209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857754949.mount: Deactivated successfully. 
Nov 1 00:44:43.469248 env[1294]: time="2025-11-01T00:44:43.469202151Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f74297b4ed17c932999c14b30f14278a04e2a6c3d249b8ec1c22fac6d25513e3\"" Nov 1 00:44:43.470663 env[1294]: time="2025-11-01T00:44:43.470629342Z" level=info msg="StartContainer for \"f74297b4ed17c932999c14b30f14278a04e2a6c3d249b8ec1c22fac6d25513e3\"" Nov 1 00:44:43.475790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013044715.mount: Deactivated successfully. Nov 1 00:44:43.531115 env[1294]: time="2025-11-01T00:44:43.531043643Z" level=info msg="StartContainer for \"f74297b4ed17c932999c14b30f14278a04e2a6c3d249b8ec1c22fac6d25513e3\" returns successfully" Nov 1 00:44:43.553507 env[1294]: time="2025-11-01T00:44:43.553452823Z" level=info msg="shim disconnected" id=f74297b4ed17c932999c14b30f14278a04e2a6c3d249b8ec1c22fac6d25513e3 Nov 1 00:44:43.553949 env[1294]: time="2025-11-01T00:44:43.553923694Z" level=warning msg="cleaning up after shim disconnected" id=f74297b4ed17c932999c14b30f14278a04e2a6c3d249b8ec1c22fac6d25513e3 namespace=k8s.io Nov 1 00:44:43.554037 env[1294]: time="2025-11-01T00:44:43.554022925Z" level=info msg="cleaning up dead shim" Nov 1 00:44:43.564243 env[1294]: time="2025-11-01T00:44:43.564181768Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:44:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4292 runtime=io.containerd.runc.v2\n" Nov 1 00:44:44.442260 kubelet[2047]: E1101 00:44:44.442224 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:44.447766 env[1294]: time="2025-11-01T00:44:44.447715389Z" level=info msg="CreateContainer within sandbox 
\"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:44:44.469108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097509749.mount: Deactivated successfully. Nov 1 00:44:44.476386 env[1294]: time="2025-11-01T00:44:44.476330440Z" level=info msg="CreateContainer within sandbox \"bf0015494ec47567002929cc82e2fb0a7610ae2cdd28124cad9a8272c10b2e58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cbb4e1d3b570fc1a444553443c38dfa1ddfbe07868bfae93e1156865aecfdf38\"" Nov 1 00:44:44.477134 env[1294]: time="2025-11-01T00:44:44.477106711Z" level=info msg="StartContainer for \"cbb4e1d3b570fc1a444553443c38dfa1ddfbe07868bfae93e1156865aecfdf38\"" Nov 1 00:44:44.554878 env[1294]: time="2025-11-01T00:44:44.554818811Z" level=info msg="StartContainer for \"cbb4e1d3b570fc1a444553443c38dfa1ddfbe07868bfae93e1156865aecfdf38\" returns successfully" Nov 1 00:44:44.973098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:44:45.450883 kubelet[2047]: E1101 00:44:45.450844 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:46.785157 kubelet[2047]: E1101 00:44:46.785117 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:47.851321 update_engine[1287]: I1101 00:44:47.851202 1287 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 00:44:47.851321 update_engine[1287]: I1101 00:44:47.851259 1287 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 00:44:47.854599 update_engine[1287]: I1101 00:44:47.854133 1287 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Nov 1 00:44:47.855181 update_engine[1287]: I1101 00:44:47.855067 1287 omaha_request_params.cc:62] Current group set to lts Nov 1 00:44:47.859406 update_engine[1287]: I1101 00:44:47.858749 1287 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 00:44:47.859406 update_engine[1287]: I1101 00:44:47.858777 1287 update_attempter.cc:643] Scheduling an action processor start. Nov 1 00:44:47.859406 update_engine[1287]: I1101 00:44:47.858804 1287 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:44:47.862264 update_engine[1287]: I1101 00:44:47.861981 1287 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 00:44:47.862264 update_engine[1287]: I1101 00:44:47.862169 1287 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 00:44:47.862264 update_engine[1287]: I1101 00:44:47.862182 1287 omaha_request_action.cc:271] Request: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: Nov 1 00:44:47.862264 update_engine[1287]: I1101 00:44:47.862191 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:44:47.871013 update_engine[1287]: I1101 00:44:47.870661 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:44:47.871013 update_engine[1287]: E1101 00:44:47.870841 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:44:47.871013 update_engine[1287]: I1101 00:44:47.870957 1287 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 00:44:47.882405 locksmithd[1335]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 
00:44:48.314215 systemd-networkd[1060]: lxc_health: Link UP Nov 1 00:44:48.317105 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:44:48.317498 systemd-networkd[1060]: lxc_health: Gained carrier Nov 1 00:44:48.785611 kubelet[2047]: E1101 00:44:48.785570 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:48.818681 kubelet[2047]: I1101 00:44:48.816336 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6xgrk" podStartSLOduration=8.816315847 podStartE2EDuration="8.816315847s" podCreationTimestamp="2025-11-01 00:44:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:45.472889464 +0000 UTC m=+127.676644215" watchObservedRunningTime="2025-11-01 00:44:48.816315847 +0000 UTC m=+131.020070600" Nov 1 00:44:49.459748 kubelet[2047]: E1101 00:44:49.459707 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:50.263305 systemd-networkd[1060]: lxc_health: Gained IPv6LL Nov 1 00:44:50.461730 kubelet[2047]: E1101 00:44:50.461695 2047 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:44:54.352223 sshd[3864]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:54.357677 systemd[1]: sshd@28-64.23.181.132:22-139.178.89.65:58188.service: Deactivated successfully. Nov 1 00:44:54.358692 systemd[1]: session-29.scope: Deactivated successfully. Nov 1 00:44:54.359704 systemd-logind[1286]: Session 29 logged out. Waiting for processes to exit. 
Nov 1 00:44:54.360656 systemd-logind[1286]: Removed session 29. Nov 1 00:44:57.844864 update_engine[1287]: I1101 00:44:57.844394 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:44:57.844864 update_engine[1287]: I1101 00:44:57.844664 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:44:57.844864 update_engine[1287]: E1101 00:44:57.844750 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:44:57.844864 update_engine[1287]: I1101 00:44:57.844826 1287 libcurl_http_fetcher.cc:283] No HTTP response, retry 2