Mar 17 18:42:16.093127 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:42:16.093156 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:42:16.093170 kernel: BIOS-provided physical RAM map:
Mar 17 18:42:16.093177 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:42:16.093183 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:42:16.093189 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:42:16.093197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 18:42:16.093204 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 18:42:16.093214 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:42:16.093220 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:42:16.093227 kernel: NX (Execute Disable) protection: active
Mar 17 18:42:16.093233 kernel: SMBIOS 2.8 present.
Mar 17 18:42:16.093240 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 18:42:16.093246 kernel: Hypervisor detected: KVM
Mar 17 18:42:16.093255 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:42:16.093265 kernel: kvm-clock: cpu 0, msr 6a19a001, primary cpu clock
Mar 17 18:42:16.093273 kernel: kvm-clock: using sched offset of 3671168723 cycles
Mar 17 18:42:16.093281 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:42:16.093288 kernel: tsc: Detected 2494.138 MHz processor
Mar 17 18:42:16.093296 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:42:16.093304 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:42:16.093311 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 18:42:16.093318 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:42:16.093329 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:42:16.093336 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 18:42:16.093343 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093350 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093358 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093365 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 18:42:16.093372 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093382 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093392 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093408 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:42:16.093419 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 18:42:16.093430 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 18:42:16.093441 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 18:42:16.093452 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 18:42:16.093463 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 18:42:16.093474 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 18:42:16.093485 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 18:42:16.093504 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:42:16.093519 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:42:16.093530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:42:16.093542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 18:42:16.093553 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 18:42:16.093565 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 18:42:16.093581 kernel: Zone ranges:
Mar 17 18:42:16.093592 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:42:16.093605 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 18:42:16.093617 kernel: Normal empty
Mar 17 18:42:16.093625 kernel: Movable zone start for each node
Mar 17 18:42:16.093633 kernel: Early memory node ranges
Mar 17 18:42:16.093642 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:42:16.093650 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 18:42:16.093658 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 18:42:16.093671 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:42:16.093683 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:42:16.093692 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 18:42:16.093700 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:42:16.093707 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:42:16.093716 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:42:16.093724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:42:16.093732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:42:16.093740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:42:16.093752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:42:16.093760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:42:16.093768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:42:16.093776 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:42:16.093784 kernel: TSC deadline timer available
Mar 17 18:42:16.093792 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:42:16.093800 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 18:42:16.093808 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:42:16.093816 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:42:16.093828 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:42:16.093836 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:42:16.093845 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:42:16.097084 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:42:16.097104 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Mar 17 18:42:16.097116 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 18:42:16.097125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 18:42:16.097134 kernel: Policy zone: DMA32
Mar 17 18:42:16.097144 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:42:16.097167 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:42:16.097182 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:42:16.097194 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:42:16.097205 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:42:16.097217 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Mar 17 18:42:16.097231 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:42:16.097239 kernel: Kernel/User page tables isolation: enabled
Mar 17 18:42:16.097248 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:42:16.097260 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:42:16.097268 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:42:16.097278 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:42:16.097286 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:42:16.097295 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:42:16.097303 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:42:16.097311 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:42:16.097319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:42:16.097327 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:42:16.097338 kernel: random: crng init done
Mar 17 18:42:16.097347 kernel: Console: colour VGA+ 80x25
Mar 17 18:42:16.097354 kernel: printk: console [tty0] enabled
Mar 17 18:42:16.097363 kernel: printk: console [ttyS0] enabled
Mar 17 18:42:16.097371 kernel: ACPI: Core revision 20210730
Mar 17 18:42:16.097379 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:42:16.097388 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:42:16.097396 kernel: x2apic enabled
Mar 17 18:42:16.097406 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:42:16.097419 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:42:16.097435 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Mar 17 18:42:16.097448 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Mar 17 18:42:16.097464 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 18:42:16.097472 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 18:42:16.097481 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:42:16.097489 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:42:16.097497 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:42:16.097505 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:42:16.097523 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 18:42:16.097550 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:42:16.097559 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:42:16.097570 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 18:42:16.097579 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:42:16.097588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:42:16.097607 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:42:16.097620 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:42:16.097632 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:42:16.097645 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:42:16.097658 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:42:16.097666 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:42:16.097675 kernel: LSM: Security Framework initializing
Mar 17 18:42:16.097684 kernel: SELinux: Initializing.
Mar 17 18:42:16.097693 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:42:16.097701 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:42:16.097710 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 18:42:16.097721 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 18:42:16.097730 kernel: signal: max sigframe size: 1776
Mar 17 18:42:16.097738 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:42:16.097747 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:42:16.097756 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:42:16.097764 kernel: x86: Booting SMP configuration:
Mar 17 18:42:16.097773 kernel: .... node #0, CPUs: #1
Mar 17 18:42:16.097781 kernel: kvm-clock: cpu 1, msr 6a19a041, secondary cpu clock
Mar 17 18:42:16.097790 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Mar 17 18:42:16.097802 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:42:16.097811 kernel: smpboot: Max logical packages: 1
Mar 17 18:42:16.097820 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Mar 17 18:42:16.097828 kernel: devtmpfs: initialized
Mar 17 18:42:16.097836 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:42:16.097845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:42:16.097869 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:42:16.097878 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:42:16.097887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:42:16.097899 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:42:16.097931 kernel: audit: type=2000 audit(1742236935.993:1): state=initialized audit_enabled=0 res=1
Mar 17 18:42:16.097944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:42:16.097956 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:42:16.097969 kernel: cpuidle: using governor menu
Mar 17 18:42:16.097977 kernel: ACPI: bus type PCI registered
Mar 17 18:42:16.097986 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:42:16.097995 kernel: dca service started, version 1.12.1
Mar 17 18:42:16.098003 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:42:16.098016 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:42:16.098024 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:42:16.098033 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:42:16.098041 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:42:16.098050 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:42:16.098058 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:42:16.098067 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:42:16.098075 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:42:16.098087 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:42:16.098099 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:42:16.098107 kernel: ACPI: Interpreter enabled
Mar 17 18:42:16.098116 kernel: ACPI: PM: (supports S0 S5)
Mar 17 18:42:16.098124 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:42:16.098133 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:42:16.098142 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 18:42:16.098150 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:42:16.098421 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:42:16.098528 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 18:42:16.098540 kernel: acpiphp: Slot [3] registered
Mar 17 18:42:16.098548 kernel: acpiphp: Slot [4] registered
Mar 17 18:42:16.098557 kernel: acpiphp: Slot [5] registered
Mar 17 18:42:16.098565 kernel: acpiphp: Slot [6] registered
Mar 17 18:42:16.098574 kernel: acpiphp: Slot [7] registered
Mar 17 18:42:16.098582 kernel: acpiphp: Slot [8] registered
Mar 17 18:42:16.098591 kernel: acpiphp: Slot [9] registered
Mar 17 18:42:16.098599 kernel: acpiphp: Slot [10] registered
Mar 17 18:42:16.098612 kernel: acpiphp: Slot [11] registered
Mar 17 18:42:16.098620 kernel: acpiphp: Slot [12] registered
Mar 17 18:42:16.098628 kernel: acpiphp: Slot [13] registered
Mar 17 18:42:16.098637 kernel: acpiphp: Slot [14] registered
Mar 17 18:42:16.098645 kernel: acpiphp: Slot [15] registered
Mar 17 18:42:16.098654 kernel: acpiphp: Slot [16] registered
Mar 17 18:42:16.098662 kernel: acpiphp: Slot [17] registered
Mar 17 18:42:16.098671 kernel: acpiphp: Slot [18] registered
Mar 17 18:42:16.098679 kernel: acpiphp: Slot [19] registered
Mar 17 18:42:16.098691 kernel: acpiphp: Slot [20] registered
Mar 17 18:42:16.098699 kernel: acpiphp: Slot [21] registered
Mar 17 18:42:16.098708 kernel: acpiphp: Slot [22] registered
Mar 17 18:42:16.098716 kernel: acpiphp: Slot [23] registered
Mar 17 18:42:16.098727 kernel: acpiphp: Slot [24] registered
Mar 17 18:42:16.098741 kernel: acpiphp: Slot [25] registered
Mar 17 18:42:16.098753 kernel: acpiphp: Slot [26] registered
Mar 17 18:42:16.098764 kernel: acpiphp: Slot [27] registered
Mar 17 18:42:16.098820 kernel: acpiphp: Slot [28] registered
Mar 17 18:42:16.098829 kernel: acpiphp: Slot [29] registered
Mar 17 18:42:16.098843 kernel: acpiphp: Slot [30] registered
Mar 17 18:42:16.098865 kernel: acpiphp: Slot [31] registered
Mar 17 18:42:16.098874 kernel: PCI host bridge to bus 0000:00
Mar 17 18:42:16.099009 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:42:16.099138 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:42:16.099248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:42:16.099332 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 18:42:16.099455 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 18:42:16.099542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:42:16.099685 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:42:16.099790 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 18:42:16.099914 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 18:42:16.100036 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 18:42:16.100173 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 18:42:16.100266 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 18:42:16.100356 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 18:42:16.100445 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 18:42:16.100609 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 18:42:16.100704 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 18:42:16.100810 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:42:16.100941 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 18:42:16.101089 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 18:42:16.102425 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 18:42:16.102554 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 18:42:16.102696 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 18:42:16.102798 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 18:42:16.102958 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 18:42:16.103073 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:42:16.103188 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:42:16.103287 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 18:42:16.103389 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 18:42:16.103502 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 18:42:16.103633 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:42:16.103761 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 18:42:16.112953 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 18:42:16.113198 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 18:42:16.113317 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 18:42:16.113432 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 18:42:16.113522 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 18:42:16.113611 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 18:42:16.113746 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:42:16.113837 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:42:16.113949 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 18:42:16.114036 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 18:42:16.114140 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:42:16.114229 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 18:42:16.114318 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 18:42:16.114410 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 18:42:16.114542 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 18:42:16.114637 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 18:42:16.114736 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 18:42:16.114753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:42:16.114775 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:42:16.114789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:42:16.114808 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:42:16.114821 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:42:16.114835 kernel: iommu: Default domain type: Translated
Mar 17 18:42:16.114863 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:42:16.114992 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 18:42:16.115089 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:42:16.115184 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 18:42:16.115195 kernel: vgaarb: loaded
Mar 17 18:42:16.115205 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:42:16.115221 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:42:16.115230 kernel: PTP clock support registered
Mar 17 18:42:16.115239 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:42:16.115248 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:42:16.115258 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:42:16.115268 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 18:42:16.115277 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:42:16.115285 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:42:16.115298 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:42:16.115307 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:42:16.115317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:42:16.115326 kernel: pnp: PnP ACPI init
Mar 17 18:42:16.115336 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 18:42:16.115345 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:42:16.115354 kernel: NET: Registered PF_INET protocol family
Mar 17 18:42:16.115363 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:42:16.115372 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 18:42:16.115385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:42:16.115394 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:42:16.115403 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 18:42:16.115412 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 18:42:16.115425 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:42:16.115440 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:42:16.115453 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:42:16.115467 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:42:16.115575 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:42:16.115665 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:42:16.115747 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:42:16.115847 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 18:42:16.115999 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 18:42:16.116104 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 18:42:16.116203 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:42:16.116296 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 17 18:42:16.116309 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 18:42:16.116411 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 40794 usecs
Mar 17 18:42:16.116423 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:42:16.116432 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:42:16.116441 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Mar 17 18:42:16.116451 kernel: Initialise system trusted keyrings
Mar 17 18:42:16.116460 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 18:42:16.116469 kernel: Key type asymmetric registered
Mar 17 18:42:16.116477 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:42:16.116486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:42:16.116501 kernel: io scheduler mq-deadline registered
Mar 17 18:42:16.116516 kernel: io scheduler kyber registered
Mar 17 18:42:16.116528 kernel: io scheduler bfq registered
Mar 17 18:42:16.116541 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:42:16.116553 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 18:42:16.116566 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:42:16.116577 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:42:16.116587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:42:16.116595 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:42:16.116609 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:42:16.116618 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:42:16.116627 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:42:16.116635 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:42:16.116779 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 18:42:16.117084 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 18:42:16.117177 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T18:42:15 UTC (1742236935)
Mar 17 18:42:16.117263 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 18:42:16.117274 kernel: intel_pstate: CPU model not supported
Mar 17 18:42:16.117283 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:42:16.117293 kernel: Segment Routing with IPv6
Mar 17 18:42:16.117302 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:42:16.117310 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:42:16.117319 kernel: Key type dns_resolver registered
Mar 17 18:42:16.117328 kernel: IPI shorthand broadcast: enabled
Mar 17 18:42:16.117366 kernel: sched_clock: Marking stable (721039350, 114098874)->(973642616, -138504392)
Mar 17 18:42:16.117376 kernel: registered taskstats version 1
Mar 17 18:42:16.117389 kernel: Loading compiled-in X.509 certificates
Mar 17 18:42:16.117398 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:42:16.117407 kernel: Key type .fscrypt registered
Mar 17 18:42:16.117415 kernel: Key type fscrypt-provisioning registered
Mar 17 18:42:16.117425 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:42:16.117433 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:42:16.117442 kernel: ima: No architecture policies found
Mar 17 18:42:16.117451 kernel: clk: Disabling unused clocks
Mar 17 18:42:16.117463 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:42:16.117472 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:42:16.117481 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:42:16.117489 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:42:16.117498 kernel: Run /init as init process
Mar 17 18:42:16.117507 kernel: with arguments:
Mar 17 18:42:16.117542 kernel: /init
Mar 17 18:42:16.117555 kernel: with environment:
Mar 17 18:42:16.117564 kernel: HOME=/
Mar 17 18:42:16.117576 kernel: TERM=linux
Mar 17 18:42:16.117585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:42:16.117599 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:42:16.117618 systemd[1]: Detected virtualization kvm.
Mar 17 18:42:16.117631 systemd[1]: Detected architecture x86-64.
Mar 17 18:42:16.117644 systemd[1]: Running in initrd.
Mar 17 18:42:16.117657 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:42:16.117671 systemd[1]: Hostname set to .
Mar 17 18:42:16.117686 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:42:16.117696 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:42:16.117705 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:42:16.117715 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:42:16.117724 systemd[1]: Reached target paths.target.
Mar 17 18:42:16.117734 systemd[1]: Reached target slices.target.
Mar 17 18:42:16.117743 systemd[1]: Reached target swap.target.
Mar 17 18:42:16.117752 systemd[1]: Reached target timers.target.
Mar 17 18:42:16.117769 systemd[1]: Listening on iscsid.socket.
Mar 17 18:42:16.117778 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:42:16.117787 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:42:16.117797 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:42:16.117806 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:42:16.117816 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:42:16.117826 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:42:16.117836 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:42:16.117861 systemd[1]: Reached target sockets.target.
Mar 17 18:42:16.117871 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:42:16.117885 systemd[1]: Finished network-cleanup.service.
Mar 17 18:42:16.117904 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:42:16.117914 systemd[1]: Starting systemd-journald.service...
Mar 17 18:42:16.117929 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:42:16.117938 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:42:16.117948 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:42:16.117957 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:42:16.117967 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:42:16.117985 systemd-journald[183]: Journal started
Mar 17 18:42:16.118078 systemd-journald[183]: Runtime Journal (/run/log/journal/1cbf670845ee4108a0a8769d1f971119) is 4.9M, max 39.5M, 34.5M free.
Mar 17 18:42:16.104086 systemd-modules-load[184]: Inserted module 'overlay'
Mar 17 18:42:16.140101 systemd[1]: Started systemd-journald.service.
Mar 17 18:42:16.140137 kernel: audit: type=1130 audit(1742236936.133:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.124502 systemd-resolved[185]: Positive Trust Anchors: Mar 17 18:42:16.124518 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:42:16.124554 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:42:16.149060 kernel: audit: type=1130 audit(1742236936.143:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.127712 systemd-resolved[185]: Defaulting to hostname 'linux'. Mar 17 18:42:16.144277 systemd[1]: Started systemd-resolved.service. Mar 17 18:42:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:16.156011 kernel: audit: type=1130 audit(1742236936.143:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.144812 systemd[1]: Reached target nss-lookup.target. Mar 17 18:42:16.148422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:42:16.152167 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:42:16.157586 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:42:16.161948 kernel: audit: type=1130 audit(1742236936.154:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.166478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:42:16.162137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:42:16.173639 kernel: Bridge firewalling registered Mar 17 18:42:16.171467 systemd-modules-load[184]: Inserted module 'br_netfilter' Mar 17 18:42:16.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.177966 kernel: audit: type=1130 audit(1742236936.174:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.184744 systemd[1]: Finished dracut-cmdline-ask.service. 
Mar 17 18:42:16.188303 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:42:16.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.193893 kernel: audit: type=1130 audit(1742236936.185:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.197886 kernel: SCSI subsystem initialized Mar 17 18:42:16.209274 dracut-cmdline[202]: dracut-dracut-053 Mar 17 18:42:16.212643 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:42:16.214425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:42:16.214479 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:42:16.221879 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:42:16.225314 systemd-modules-load[184]: Inserted module 'dm_multipath' Mar 17 18:42:16.226900 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:42:16.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:16.231668 kernel: audit: type=1130 audit(1742236936.226:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.230749 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:42:16.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.246946 kernel: audit: type=1130 audit(1742236936.243:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.243486 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:42:16.318897 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:42:16.340902 kernel: iscsi: registered transport (tcp) Mar 17 18:42:16.369178 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:42:16.369285 kernel: QLogic iSCSI HBA Driver Mar 17 18:42:16.429449 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:42:16.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.431574 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:42:16.434675 kernel: audit: type=1130 audit(1742236936.429:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:16.502943 kernel: raid6: avx2x4 gen() 19267 MB/s Mar 17 18:42:16.519934 kernel: raid6: avx2x4 xor() 6783 MB/s Mar 17 18:42:16.537425 kernel: raid6: avx2x2 gen() 20167 MB/s Mar 17 18:42:16.553934 kernel: raid6: avx2x2 xor() 17607 MB/s Mar 17 18:42:16.570944 kernel: raid6: avx2x1 gen() 18493 MB/s Mar 17 18:42:16.587945 kernel: raid6: avx2x1 xor() 9470 MB/s Mar 17 18:42:16.605104 kernel: raid6: sse2x4 gen() 7918 MB/s Mar 17 18:42:16.621947 kernel: raid6: sse2x4 xor() 4604 MB/s Mar 17 18:42:16.638938 kernel: raid6: sse2x2 gen() 9854 MB/s Mar 17 18:42:16.656019 kernel: raid6: sse2x2 xor() 7380 MB/s Mar 17 18:42:16.673112 kernel: raid6: sse2x1 gen() 8172 MB/s Mar 17 18:42:16.690319 kernel: raid6: sse2x1 xor() 5572 MB/s Mar 17 18:42:16.690433 kernel: raid6: using algorithm avx2x2 gen() 20167 MB/s Mar 17 18:42:16.690451 kernel: raid6: .... xor() 17607 MB/s, rmw enabled Mar 17 18:42:16.690937 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:42:16.706901 kernel: xor: automatically using best checksumming function avx Mar 17 18:42:16.831896 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:42:16.847732 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:42:16.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.847000 audit: BPF prog-id=7 op=LOAD Mar 17 18:42:16.847000 audit: BPF prog-id=8 op=LOAD Mar 17 18:42:16.849570 systemd[1]: Starting systemd-udevd.service... Mar 17 18:42:16.867486 systemd-udevd[385]: Using default interface naming scheme 'v252'. Mar 17 18:42:16.875008 systemd[1]: Started systemd-udevd.service. Mar 17 18:42:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:16.877437 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:42:16.899077 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Mar 17 18:42:16.943998 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:42:16.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:16.945713 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:42:17.001625 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:42:17.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:17.071637 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 18:42:17.116656 kernel: scsi host0: Virtio SCSI HBA Mar 17 18:42:17.116895 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:42:17.117018 kernel: GPT:9289727 != 125829119 Mar 17 18:42:17.117034 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:42:17.117071 kernel: GPT:9289727 != 125829119 Mar 17 18:42:17.117086 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:42:17.117100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:42:17.117116 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:42:17.122332 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Mar 17 18:42:17.158887 kernel: libata version 3.00 loaded. Mar 17 18:42:17.165926 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 18:42:17.221290 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Mar 17 18:42:17.221323 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 18:42:17.221341 kernel: AES CTR mode by8 optimization enabled Mar 17 18:42:17.221356 kernel: ACPI: bus type USB registered Mar 17 18:42:17.221371 kernel: usbcore: registered new interface driver usbfs Mar 17 18:42:17.221396 kernel: usbcore: registered new interface driver hub Mar 17 18:42:17.221411 kernel: usbcore: registered new device driver usb Mar 17 18:42:17.221427 kernel: scsi host1: ata_piix Mar 17 18:42:17.221594 kernel: scsi host2: ata_piix Mar 17 18:42:17.221767 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 17 18:42:17.221784 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 17 18:42:17.195002 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:42:17.284418 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Mar 17 18:42:17.195941 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:42:17.285128 disk-uuid[456]: Primary Header is updated. Mar 17 18:42:17.285128 disk-uuid[456]: Secondary Entries is updated. Mar 17 18:42:17.285128 disk-uuid[456]: Secondary Header is updated. Mar 17 18:42:17.198264 systemd[1]: Starting disk-uuid.service... Mar 17 18:42:17.209551 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:42:17.214527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:42:17.290784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 17 18:42:17.386884 kernel: ehci-pci: EHCI PCI platform driver Mar 17 18:42:17.392893 kernel: uhci_hcd: USB Universal Host Controller Interface driver Mar 17 18:42:17.411184 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 17 18:42:17.415292 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 17 18:42:17.415472 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 17 18:42:17.415572 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Mar 17 18:42:17.415713 kernel: hub 1-0:1.0: USB hub found Mar 17 18:42:17.415889 kernel: hub 1-0:1.0: 2 ports detected Mar 17 18:42:18.218924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:42:18.219628 disk-uuid[457]: The operation has completed successfully. Mar 17 18:42:18.284257 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:42:18.285309 systemd[1]: Finished disk-uuid.service. Mar 17 18:42:18.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.287901 systemd[1]: Starting verity-setup.service... Mar 17 18:42:18.316881 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:42:18.374207 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:42:18.377359 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:42:18.380314 systemd[1]: Finished verity-setup.service. Mar 17 18:42:18.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.473893 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Mar 17 18:42:18.475218 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:42:18.476338 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:42:18.478011 systemd[1]: Starting ignition-setup.service... Mar 17 18:42:18.479152 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:42:18.495555 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:42:18.495647 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:42:18.495669 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:42:18.514087 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:42:18.523180 systemd[1]: Finished ignition-setup.service. Mar 17 18:42:18.524838 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:42:18.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.642294 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:42:18.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.643000 audit: BPF prog-id=9 op=LOAD Mar 17 18:42:18.645781 systemd[1]: Starting systemd-networkd.service... Mar 17 18:42:18.678374 systemd-networkd[690]: lo: Link UP Mar 17 18:42:18.679277 systemd-networkd[690]: lo: Gained carrier Mar 17 18:42:18.681122 systemd-networkd[690]: Enumeration completed Mar 17 18:42:18.681911 systemd[1]: Started systemd-networkd.service. Mar 17 18:42:18.682519 systemd-networkd[690]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:42:18.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.683138 systemd[1]: Reached target network.target. Mar 17 18:42:18.684349 systemd-networkd[690]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Mar 17 18:42:18.687062 systemd-networkd[690]: eth1: Link UP Mar 17 18:42:18.687069 systemd-networkd[690]: eth1: Gained carrier Mar 17 18:42:18.689925 ignition[608]: Ignition 2.14.0 Mar 17 18:42:18.689937 ignition[608]: Stage: fetch-offline Mar 17 18:42:18.691152 systemd[1]: Starting iscsiuio.service... Mar 17 18:42:18.690046 ignition[608]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:42:18.690083 ignition[608]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:42:18.699281 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:42:18.699456 ignition[608]: parsed url from cmdline: "" Mar 17 18:42:18.699460 ignition[608]: no config URL provided Mar 17 18:42:18.699467 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:42:18.700785 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:42:18.699477 ignition[608]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:42:18.699485 ignition[608]: failed to fetch config: resource requires networking Mar 17 18:42:18.699614 ignition[608]: Ignition finished successfully Mar 17 18:42:18.704209 systemd-networkd[690]: eth0: Link UP Mar 17 18:42:18.704219 systemd-networkd[690]: eth0: Gained carrier Mar 17 18:42:18.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:18.715680 systemd[1]: Started iscsiuio.service. Mar 17 18:42:18.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.717652 systemd[1]: Starting ignition-fetch.service... Mar 17 18:42:18.721181 systemd[1]: Starting iscsid.service... Mar 17 18:42:18.726034 systemd-networkd[690]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Mar 17 18:42:18.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.731379 iscsid[696]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:42:18.731379 iscsid[696]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:42:18.731379 iscsid[696]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:42:18.731379 iscsid[696]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:42:18.731379 iscsid[696]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:42:18.731379 iscsid[696]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:42:18.731379 iscsid[696]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:42:18.729797 systemd[1]: Started iscsid.service. Mar 17 18:42:18.731526 systemd[1]: Starting dracut-initqueue.service... 
Mar 17 18:42:18.734981 systemd-networkd[690]: eth0: DHCPv4 address 146.190.61.194/19, gateway 146.190.32.1 acquired from 169.254.169.253 Mar 17 18:42:18.747572 ignition[695]: Ignition 2.14.0 Mar 17 18:42:18.749095 ignition[695]: Stage: fetch Mar 17 18:42:18.749824 ignition[695]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:42:18.750959 ignition[695]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:42:18.753561 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:42:18.753721 ignition[695]: parsed url from cmdline: "" Mar 17 18:42:18.753727 ignition[695]: no config URL provided Mar 17 18:42:18.753735 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:42:18.753749 ignition[695]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:42:18.753796 ignition[695]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 17 18:42:18.761077 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:42:18.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.761758 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:42:18.762231 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:42:18.762833 systemd[1]: Reached target remote-fs.target. Mar 17 18:42:18.765234 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:42:18.776094 ignition[695]: GET result: OK Mar 17 18:42:18.776272 ignition[695]: parsing config with SHA512: 1ceb2e323654a802933f22d7dc52a5676a3f4fd056366240a8dbf7c1ba4aa0cc5202aae54436a0201761ee6ebab61cd5eaadb2a75ee9d88501dc91fe149e9f3b Mar 17 18:42:18.780070 systemd[1]: Finished dracut-pre-mount.service. 
Mar 17 18:42:18.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.790046 unknown[695]: fetched base config from "system" Mar 17 18:42:18.790073 unknown[695]: fetched base config from "system" Mar 17 18:42:18.790649 ignition[695]: fetch: fetch complete Mar 17 18:42:18.790080 unknown[695]: fetched user config from "digitalocean" Mar 17 18:42:18.790656 ignition[695]: fetch: fetch passed Mar 17 18:42:18.790727 ignition[695]: Ignition finished successfully Mar 17 18:42:18.794888 systemd[1]: Finished ignition-fetch.service. Mar 17 18:42:18.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.797365 systemd[1]: Starting ignition-kargs.service... Mar 17 18:42:18.810507 ignition[715]: Ignition 2.14.0 Mar 17 18:42:18.810523 ignition[715]: Stage: kargs Mar 17 18:42:18.810706 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:42:18.810739 ignition[715]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:42:18.813240 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:42:18.815756 ignition[715]: kargs: kargs passed Mar 17 18:42:18.817193 systemd[1]: Finished ignition-kargs.service. Mar 17 18:42:18.815870 ignition[715]: Ignition finished successfully Mar 17 18:42:18.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.819123 systemd[1]: Starting ignition-disks.service... 
Mar 17 18:42:18.833086 ignition[721]: Ignition 2.14.0 Mar 17 18:42:18.833095 ignition[721]: Stage: disks Mar 17 18:42:18.833267 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:42:18.833291 ignition[721]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:42:18.836158 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:42:18.838801 ignition[721]: disks: disks passed Mar 17 18:42:18.839974 systemd[1]: Finished ignition-disks.service. Mar 17 18:42:18.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.838930 ignition[721]: Ignition finished successfully Mar 17 18:42:18.840876 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:42:18.841480 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:42:18.842119 systemd[1]: Reached target local-fs.target. Mar 17 18:42:18.842784 systemd[1]: Reached target sysinit.target. Mar 17 18:42:18.843439 systemd[1]: Reached target basic.target. Mar 17 18:42:18.845594 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:42:18.867316 systemd-fsck[729]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:42:18.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:18.872025 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:42:18.874889 systemd[1]: Mounting sysroot.mount... Mar 17 18:42:18.891899 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:42:18.892408 systemd[1]: Mounted sysroot.mount. 
Mar 17 18:42:18.893220 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:42:18.895930 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:42:18.897981 systemd[1]: Starting flatcar-digitalocean-network.service... Mar 17 18:42:18.900805 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:42:18.901470 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:42:18.901526 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:42:18.907247 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:42:18.912140 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:42:18.925208 initrd-setup-root[741]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:42:18.940341 initrd-setup-root[749]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:42:18.948638 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:42:18.957795 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:42:19.044988 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:42:19.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:19.046336 coreos-metadata[735]: Mar 17 18:42:19.046 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:42:19.047098 systemd[1]: Starting ignition-mount.service... Mar 17 18:42:19.051937 systemd[1]: Starting sysroot-boot.service... Mar 17 18:42:19.068421 coreos-metadata[735]: Mar 17 18:42:19.068 INFO Fetch successful Mar 17 18:42:19.071570 bash[786]: umount: /sysroot/usr/share/oem: not mounted. 
Mar 17 18:42:19.073172 coreos-metadata[736]: Mar 17 18:42:19.073 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:42:19.077116 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Mar 17 18:42:19.077279 systemd[1]: Finished flatcar-digitalocean-network.service. Mar 17 18:42:19.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:19.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:19.087884 coreos-metadata[736]: Mar 17 18:42:19.087 INFO Fetch successful Mar 17 18:42:19.095931 coreos-metadata[736]: Mar 17 18:42:19.095 INFO wrote hostname ci-3510.3.7-8-addee6c60b to /sysroot/etc/hostname Mar 17 18:42:19.098338 ignition[788]: INFO : Ignition 2.14.0 Mar 17 18:42:19.099027 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:42:19.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:19.100053 ignition[788]: INFO : Stage: mount Mar 17 18:42:19.100673 ignition[788]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:42:19.101343 ignition[788]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:42:19.103426 systemd[1]: Finished sysroot-boot.service. Mar 17 18:42:19.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:19.106719 ignition[788]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:42:19.109132 ignition[788]: INFO : mount: mount passed Mar 17 18:42:19.109757 ignition[788]: INFO : Ignition finished successfully Mar 17 18:42:19.111499 systemd[1]: Finished ignition-mount.service. Mar 17 18:42:19.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:19.400137 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:42:19.411945 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (797) Mar 17 18:42:19.414492 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:42:19.414609 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:42:19.414624 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:42:19.421080 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:42:19.429395 systemd[1]: Starting ignition-files.service... 
Mar 17 18:42:19.456620 ignition[817]: INFO : Ignition 2.14.0
Mar 17 18:42:19.456620 ignition[817]: INFO : Stage: files
Mar 17 18:42:19.458146 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:42:19.458146 ignition[817]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Mar 17 18:42:19.459752 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 18:42:19.460824 ignition[817]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:42:19.461678 ignition[817]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:42:19.461678 ignition[817]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:42:19.464241 ignition[817]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:42:19.465187 ignition[817]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:42:19.465922 ignition[817]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:42:19.465352 unknown[817]: wrote ssh authorized keys file for user: core
Mar 17 18:42:19.468210 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:42:19.468210 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:42:19.509744 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:42:19.595374 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:42:19.596621 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:42:19.596621 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:42:19.932380 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:42:20.011475 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:42:20.012361 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:42:20.013422 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:42:20.014167 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:42:20.015142 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:42:20.015807 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:42:20.016624 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:42:20.017346 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:42:20.018242 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:42:20.019881 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:42:20.122319 systemd-networkd[690]: eth1: Gained IPv6LL
Mar 17 18:42:20.314269 systemd-networkd[690]: eth0: Gained IPv6LL
Mar 17 18:42:20.474984 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:42:20.789749 ignition[817]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:42:20.790821 ignition[817]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:42:20.791512 ignition[817]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:42:20.792138 ignition[817]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Mar 17 18:42:20.793333 ignition[817]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:42:20.794466 ignition[817]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:42:20.794466 ignition[817]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Mar 17 18:42:20.795899 ignition[817]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:42:20.795899 ignition[817]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 18:42:20.795899 ignition[817]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:42:20.795899 ignition[817]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:42:20.808664 ignition[817]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:42:20.808664 ignition[817]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:42:20.808664 ignition[817]: INFO : files: files passed
Mar 17 18:42:20.808664 ignition[817]: INFO : Ignition finished successfully
Mar 17 18:42:20.813382 systemd[1]: Finished ignition-files.service.
Mar 17 18:42:20.822096 kernel: kauditd_printk_skb: 28 callbacks suppressed
Mar 17 18:42:20.822134 kernel: audit: type=1130 audit(1742236940.813:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.815881 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:42:20.818625 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:42:20.820757 systemd[1]: Starting ignition-quench.service...
Mar 17 18:42:20.827050 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:42:20.834738 kernel: audit: type=1130 audit(1742236940.827:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.834802 kernel: audit: type=1131 audit(1742236940.827:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.827206 systemd[1]: Finished ignition-quench.service.
Mar 17 18:42:20.836589 initrd-setup-root-after-ignition[842]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:42:20.837696 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:42:20.841470 kernel: audit: type=1130 audit(1742236940.837:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.838435 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:42:20.842995 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:42:20.868680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:42:20.868817 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:42:20.869944 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:42:20.876184 kernel: audit: type=1130 audit(1742236940.868:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.876226 kernel: audit: type=1131 audit(1742236940.868:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.875716 systemd[1]: Reached target initrd.target.
Mar 17 18:42:20.876498 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:42:20.878169 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:42:20.895599 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:42:20.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.898126 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:42:20.899892 kernel: audit: type=1130 audit(1742236940.895:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.912808 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:42:20.914245 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:42:20.915521 systemd[1]: Stopped target timers.target.
Mar 17 18:42:20.916698 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:42:20.917575 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:42:20.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.924205 systemd[1]: Stopped target initrd.target.
Mar 17 18:42:20.930449 kernel: audit: type=1131 audit(1742236940.917:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.930135 systemd[1]: Stopped target basic.target.
Mar 17 18:42:20.930772 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:42:20.931427 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:42:20.932310 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:42:20.933211 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:42:20.933935 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:42:20.934689 systemd[1]: Stopped target sysinit.target.
Mar 17 18:42:20.935433 systemd[1]: Stopped target local-fs.target.
Mar 17 18:42:20.936188 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:42:20.937117 systemd[1]: Stopped target swap.target.
Mar 17 18:42:20.937872 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:42:20.941875 kernel: audit: type=1131 audit(1742236940.937:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.938056 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:42:20.938770 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:42:20.942326 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:42:20.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.942564 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:42:20.947267 kernel: audit: type=1131 audit(1742236940.942:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.943725 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:42:20.944011 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:42:20.947961 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:42:20.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.948137 systemd[1]: Stopped ignition-files.service.
Mar 17 18:42:20.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.949373 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 18:42:20.949542 systemd[1]: Stopped flatcar-metadata-hostname.service.
Mar 17 18:42:20.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.951637 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:42:20.952417 iscsid[696]: iscsid shutting down.
Mar 17 18:42:20.955549 systemd[1]: Stopping iscsid.service...
Mar 17 18:42:20.962159 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:42:20.962930 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:42:20.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.963281 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:42:20.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.964118 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:42:20.964280 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:42:20.967166 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:42:20.967334 systemd[1]: Stopped iscsid.service.
Mar 17 18:42:20.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.972296 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:42:20.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.973601 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:42:20.973733 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:42:20.974444 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:42:20.974568 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:42:20.979588 ignition[855]: INFO : Ignition 2.14.0
Mar 17 18:42:20.980309 ignition[855]: INFO : Stage: umount
Mar 17 18:42:20.980959 ignition[855]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:42:20.981617 ignition[855]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Mar 17 18:42:20.985008 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Mar 17 18:42:20.989089 ignition[855]: INFO : umount: umount passed
Mar 17 18:42:20.990967 ignition[855]: INFO : Ignition finished successfully
Mar 17 18:42:20.992918 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:42:20.993073 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:42:20.994075 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:42:20.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.994138 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:42:20.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.994531 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:42:20.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.994569 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:42:20.995298 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:42:20.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:20.995362 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:42:20.996061 systemd[1]: Stopped target network.target.
Mar 17 18:42:20.996680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:42:20.996738 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:42:20.997542 systemd[1]: Stopped target paths.target.
Mar 17 18:42:20.998244 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:42:21.002329 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:42:21.018509 systemd[1]: Stopped target slices.target.
Mar 17 18:42:21.018803 systemd[1]: Stopped target sockets.target.
Mar 17 18:42:21.019203 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:42:21.019260 systemd[1]: Closed iscsid.socket.
Mar 17 18:42:21.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.019566 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:42:21.019608 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:42:21.022556 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:42:21.022649 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:42:21.043618 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:42:21.051001 systemd-networkd[690]: eth1: DHCPv6 lease lost
Mar 17 18:42:21.055100 systemd-networkd[690]: eth0: DHCPv6 lease lost
Mar 17 18:42:21.057123 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:42:21.058912 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:42:21.059482 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:42:21.059589 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:42:21.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.061735 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:42:21.061826 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:42:21.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.062000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:42:21.062782 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:42:21.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.062824 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:42:21.064284 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:42:21.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.064434 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:42:21.066112 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:42:21.066885 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:42:21.066992 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:42:21.068455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:42:21.068529 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:42:21.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.069175 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:42:21.069244 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:42:21.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.069727 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:42:21.082000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:42:21.073648 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:42:21.074332 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:42:21.074443 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:42:21.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.081108 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:42:21.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.081320 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:42:21.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.083508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:42:21.083580 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:42:21.086339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:42:21.086387 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:42:21.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.087242 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:42:21.087318 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:42:21.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.088072 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:42:21.088136 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:42:21.088733 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:42:21.088781 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:42:21.090800 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:42:21.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:21.091675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:42:21.091809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:42:21.092789 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:42:21.093140 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:42:21.100846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:42:21.101008 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:42:21.103103 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:42:21.103839 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:42:21.103979 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:42:21.108494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:42:21.108590 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:42:21.109609 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:42:21.111315 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:42:21.126720 systemd[1]: Switching root.
Mar 17 18:42:21.146474 systemd-journald[183]: Journal stopped
Mar 17 18:42:25.050031 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Mar 17 18:42:25.050167 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:42:25.050193 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:42:25.050206 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:42:25.050219 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:42:25.050230 kernel: SELinux: policy capability open_perms=1
Mar 17 18:42:25.050247 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:42:25.050259 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:42:25.050271 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:42:25.050291 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:42:25.050303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:42:25.050316 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:42:25.050329 systemd[1]: Successfully loaded SELinux policy in 49.768ms.
Mar 17 18:42:25.050349 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.243ms.
Mar 17 18:42:25.050365 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:42:25.050378 systemd[1]: Detected virtualization kvm.
Mar 17 18:42:25.050391 systemd[1]: Detected architecture x86-64.
Mar 17 18:42:25.050404 systemd[1]: Detected first boot.
Mar 17 18:42:25.050416 systemd[1]: Hostname set to .
Mar 17 18:42:25.050430 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:42:25.050443 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:42:25.050456 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:42:25.050473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:25.050486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:25.050515 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:25.050530 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:42:25.050542 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:42:25.050554 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:42:25.050568 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:42:25.050581 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:42:25.050597 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:42:25.050610 systemd[1]: Created slice system-getty.slice.
Mar 17 18:42:25.050623 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:42:25.050636 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:42:25.050649 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:42:25.050661 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:42:25.050675 systemd[1]: Created slice user.slice.
Mar 17 18:42:25.050687 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:42:25.050699 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:42:25.050715 systemd[1]: Set up automount boot.automount.
Mar 17 18:42:25.050729 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:42:25.050742 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:42:25.050755 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:42:25.050768 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:42:25.050780 systemd[1]: Reached target integritysetup.target.
Mar 17 18:42:25.050808 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:42:25.050827 systemd[1]: Reached target remote-fs.target.
Mar 17 18:42:25.050841 systemd[1]: Reached target slices.target.
Mar 17 18:42:25.050867 systemd[1]: Reached target swap.target.
Mar 17 18:42:25.050883 systemd[1]: Reached target torcx.target.
Mar 17 18:42:25.050904 systemd[1]: Reached target veritysetup.target.
Mar 17 18:42:25.050924 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:42:25.050943 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:42:25.050962 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:42:25.050981 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:42:25.051004 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:42:25.051018 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:42:25.051032 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:42:25.051044 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:42:25.051062 systemd[1]: Mounting media.mount... Mar 17 18:42:25.051079 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:25.051092 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:42:25.051105 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:42:25.051118 systemd[1]: Mounting tmp.mount... Mar 17 18:42:25.051133 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:42:25.051146 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:25.051169 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:42:25.051182 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:42:25.051195 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:25.051209 systemd[1]: Starting modprobe@drm.service... Mar 17 18:42:25.051221 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:25.051233 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:42:25.051246 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:25.051262 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:42:25.051274 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:42:25.051287 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:42:25.051300 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:42:25.051313 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:42:25.051326 systemd[1]: Stopped systemd-journald.service. Mar 17 18:42:25.051340 systemd[1]: Starting systemd-journald.service... Mar 17 18:42:25.051353 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:42:25.051494 systemd[1]: Starting systemd-network-generator.service... 
Mar 17 18:42:25.051518 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:42:25.051532 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:42:25.051545 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:42:25.051557 systemd[1]: Stopped verity-setup.service. Mar 17 18:42:25.051570 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:25.051594 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:42:25.051607 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:42:25.051620 systemd[1]: Mounted media.mount. Mar 17 18:42:25.051632 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:42:25.051647 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:42:25.051660 systemd[1]: Mounted tmp.mount. Mar 17 18:42:25.051672 kernel: fuse: init (API version 7.34) Mar 17 18:42:25.051685 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:42:25.051697 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:42:25.051722 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:42:25.051735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:25.051753 systemd-journald[958]: Journal started Mar 17 18:42:25.051824 systemd-journald[958]: Runtime Journal (/run/log/journal/1cbf670845ee4108a0a8769d1f971119) is 4.9M, max 39.5M, 34.5M free. 
Mar 17 18:42:21.301000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:42:21.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:42:21.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:42:21.361000 audit: BPF prog-id=10 op=LOAD Mar 17 18:42:21.361000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:42:21.361000 audit: BPF prog-id=11 op=LOAD Mar 17 18:42:21.361000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:42:21.469000 audit[888]: AVC avc: denied { associate } for pid=888 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:42:21.469000 audit[888]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=871 pid=888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:21.469000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:42:21.471000 audit[888]: AVC avc: denied { associate } for pid=888 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:42:21.471000 audit[888]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=871 pid=888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:21.471000 audit: CWD cwd="/" Mar 17 18:42:21.471000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:21.471000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:21.471000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:42:24.856000 audit: BPF prog-id=12 op=LOAD Mar 17 18:42:24.856000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:42:24.856000 audit: BPF prog-id=13 op=LOAD Mar 17 18:42:24.856000 audit: BPF prog-id=14 op=LOAD Mar 17 18:42:24.856000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:42:24.856000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:42:24.857000 audit: BPF prog-id=15 op=LOAD Mar 17 18:42:24.857000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:42:24.857000 audit: BPF prog-id=16 op=LOAD Mar 17 18:42:24.857000 audit: BPF prog-id=17 op=LOAD Mar 17 18:42:24.857000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:42:24.857000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:42:24.859000 audit: BPF prog-id=18 op=LOAD Mar 17 18:42:24.859000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:42:24.859000 audit: BPF prog-id=19 op=LOAD Mar 17 18:42:24.859000 audit: BPF prog-id=20 op=LOAD Mar 17 18:42:24.859000 
audit: BPF prog-id=16 op=UNLOAD Mar 17 18:42:24.859000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:42:24.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.867000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:42:24.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:24.985000 audit: BPF prog-id=21 op=LOAD Mar 17 18:42:24.985000 audit: BPF prog-id=22 op=LOAD Mar 17 18:42:24.985000 audit: BPF prog-id=23 op=LOAD Mar 17 18:42:24.985000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:42:24.985000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:42:25.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.047000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:42:25.047000 audit[958]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffccd2941f0 a2=4000 a3=7ffccd29428c items=0 ppid=1 pid=958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:25.047000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:42:25.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:21.467109 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:42:25.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.855085 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:42:21.467958 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:42:24.855109 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:42:21.467990 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:42:24.861376 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:42:25.055288 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:42:25.055331 systemd[1]: Started systemd-journald.service. 
Mar 17 18:42:21.468040 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:42:25.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:21.468057 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:42:21.468112 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:42:25.059737 kernel: loop: module loaded Mar 17 18:42:25.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:25.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.056462 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:42:21.468135 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:42:25.056980 systemd[1]: Finished modprobe@drm.service. Mar 17 18:42:21.468454 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:42:25.058308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:21.468522 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:42:25.058526 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:21.468545 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:42:25.059180 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:42:21.469731 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:42:25.059984 systemd[1]: Finished modprobe@fuse.service. 
Mar 17 18:42:21.469791 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:42:25.060726 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:21.469824 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:42:21.469863 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:42:21.469934 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:42:21.469957 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:42:24.366732 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:24.367181 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:24.367402 
/usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:25.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:24.367703 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:25.066071 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:42:24.367801 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:42:24.367920 /usr/lib/systemd/system-generators/torcx-generator[888]: time="2025-03-17T18:42:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:42:25.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.067508 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:42:25.068395 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:42:25.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.069181 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:42:25.070339 systemd[1]: Reached target network-pre.target. Mar 17 18:42:25.072581 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:42:25.078534 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:42:25.080166 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:42:25.085311 systemd[1]: Starting systemd-hwdb-update.service... 
Mar 17 18:42:25.088264 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:42:25.088915 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:42:25.091891 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:42:25.092463 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:42:25.099240 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:42:25.103461 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:42:25.103980 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:42:25.123130 systemd-journald[958]: Time spent on flushing to /var/log/journal/1cbf670845ee4108a0a8769d1f971119 is 56.255ms for 1158 entries. Mar 17 18:42:25.123130 systemd-journald[958]: System Journal (/var/log/journal/1cbf670845ee4108a0a8769d1f971119) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:42:25.186520 systemd-journald[958]: Received client request to flush runtime journal. Mar 17 18:42:25.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.123206 systemd[1]: Finished systemd-random-seed.service. 
Mar 17 18:42:25.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.125045 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:42:25.153644 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:42:25.161809 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:42:25.163738 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:42:25.187792 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:42:25.192136 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:42:25.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.194566 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:42:25.212067 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:42:25.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.215342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:42:25.217054 udevadm[999]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:42:25.255921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:42:25.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.910054 systemd[1]: Finished systemd-hwdb-update.service. 
Mar 17 18:42:25.914620 kernel: kauditd_printk_skb: 109 callbacks suppressed Mar 17 18:42:25.914796 kernel: audit: type=1130 audit(1742236945.909:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.916929 kernel: audit: type=1334 audit(1742236945.913:150): prog-id=24 op=LOAD Mar 17 18:42:25.917113 kernel: audit: type=1334 audit(1742236945.913:151): prog-id=25 op=LOAD Mar 17 18:42:25.913000 audit: BPF prog-id=24 op=LOAD Mar 17 18:42:25.913000 audit: BPF prog-id=25 op=LOAD Mar 17 18:42:25.916183 systemd[1]: Starting systemd-udevd.service... Mar 17 18:42:25.913000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:42:25.913000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:42:25.919911 kernel: audit: type=1334 audit(1742236945.913:152): prog-id=7 op=UNLOAD Mar 17 18:42:25.920003 kernel: audit: type=1334 audit(1742236945.913:153): prog-id=8 op=UNLOAD Mar 17 18:42:25.945792 systemd-udevd[1002]: Using default interface naming scheme 'v252'. Mar 17 18:42:25.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.992016 kernel: audit: type=1130 audit(1742236945.987:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:25.987993 systemd[1]: Started systemd-udevd.service. Mar 17 18:42:25.993374 systemd[1]: Starting systemd-networkd.service... 
Mar 17 18:42:25.990000 audit: BPF prog-id=26 op=LOAD Mar 17 18:42:25.998877 kernel: audit: type=1334 audit(1742236945.990:155): prog-id=26 op=LOAD Mar 17 18:42:26.008533 kernel: audit: type=1334 audit(1742236946.004:156): prog-id=27 op=LOAD Mar 17 18:42:26.008692 kernel: audit: type=1334 audit(1742236946.005:157): prog-id=28 op=LOAD Mar 17 18:42:26.008734 kernel: audit: type=1334 audit(1742236946.006:158): prog-id=29 op=LOAD Mar 17 18:42:26.004000 audit: BPF prog-id=27 op=LOAD Mar 17 18:42:26.005000 audit: BPF prog-id=28 op=LOAD Mar 17 18:42:26.006000 audit: BPF prog-id=29 op=LOAD Mar 17 18:42:26.008405 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:42:26.060257 systemd[1]: Started systemd-userdbd.service. Mar 17 18:42:26.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.085025 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:42:26.108533 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:26.108779 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:26.110885 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:26.115243 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:26.120195 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:26.121083 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:42:26.121220 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:42:26.121380 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 18:42:26.122428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:26.122985 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:26.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.130119 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:26.130413 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:26.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.131374 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:42:26.135454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:26.135689 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:42:26.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:26.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.136673 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:42:26.213690 systemd-networkd[1004]: lo: Link UP Mar 17 18:42:26.214296 systemd-networkd[1004]: lo: Gained carrier Mar 17 18:42:26.215291 systemd-networkd[1004]: Enumeration completed Mar 17 18:42:26.215595 systemd-networkd[1004]: eth1: Configuring with /run/systemd/network/10-82:d9:20:8e:52:03.network. Mar 17 18:42:26.215647 systemd[1]: Started systemd-networkd.service. Mar 17 18:42:26.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:26.217833 systemd-networkd[1004]: eth0: Configuring with /run/systemd/network/10-1e:f1:51:a4:bf:df.network. Mar 17 18:42:26.219228 systemd-networkd[1004]: eth1: Link UP Mar 17 18:42:26.219373 systemd-networkd[1004]: eth1: Gained carrier Mar 17 18:42:26.225472 systemd-networkd[1004]: eth0: Link UP Mar 17 18:42:26.225489 systemd-networkd[1004]: eth0: Gained carrier Mar 17 18:42:26.228264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 17 18:42:26.260896 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:42:26.265919 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:42:26.289000 audit[1003]: AVC avc: denied { confidentiality } for pid=1003 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:42:26.289000 audit[1003]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5630b0dfada0 a1=338ac a2=7f4fb04babc5 a3=5 items=110 ppid=1002 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:26.289000 audit: CWD cwd="/"
Mar 17 18:42:26.289000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=1 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=2 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=3 name=(null) inode=13725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=4 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=5 name=(null) inode=13726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=6 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=7 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=8 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=9 name=(null) inode=13728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=10 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=11 name=(null) inode=13729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=12 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=13 name=(null) inode=13730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=14 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=15 name=(null) inode=13731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=16 name=(null) inode=13727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=17 name=(null) inode=13732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=18 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=19 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=20 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=21 name=(null) inode=13734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=22 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=23 name=(null) inode=13735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=24 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=25 name=(null) inode=13736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=26 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=27 name=(null) inode=13737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=28 name=(null) inode=13733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=29 name=(null) inode=13738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=30 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=31 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=32 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=33 name=(null) inode=13740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=34 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=35 name=(null) inode=13741 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=36 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=37 name=(null) inode=13742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=38 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=39 name=(null) inode=13743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=40 name=(null) inode=13739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=41 name=(null) inode=13744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=42 name=(null) inode=13724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=43 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=44 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=45 name=(null) inode=13746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=46 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=47 name=(null) inode=13747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=48 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=49 name=(null) inode=13748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=50 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=51 name=(null) inode=13749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=52 name=(null) inode=13745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=53 name=(null) inode=13750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=55 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=56 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=57 name=(null) inode=13752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=58 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=59 name=(null) inode=13753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=60 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=61 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=62 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=63 name=(null) inode=13755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=64 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=65 name=(null) inode=13756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=66 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=67 name=(null) inode=13757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=68 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=69 name=(null) inode=13758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=70 name=(null) inode=13754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=71 name=(null) inode=13759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=72 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=73 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=74 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=75 name=(null) inode=13761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=76 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=77 name=(null) inode=13762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=78 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=79 name=(null) inode=13763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=80 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=81 name=(null) inode=13764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=82 name=(null) inode=13760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=83 name=(null) inode=13765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=84 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=85 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=86 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=87 name=(null) inode=13767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=88 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=89 name=(null) inode=13768 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=90 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=91 name=(null) inode=13769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=92 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=93 name=(null) inode=13770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=94 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=95 name=(null) inode=13771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=96 name=(null) inode=13751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=97 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=98 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=99 name=(null) inode=13773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=100 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=101 name=(null) inode=13774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=102 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=103 name=(null) inode=13775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=104 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=105 name=(null) inode=13776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=106 name=(null) inode=13772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=107 name=(null) inode=13777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PATH item=109 name=(null) inode=13778 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:42:26.289000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:42:26.330947 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 18:42:26.357882 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:42:26.365883 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:42:26.485111 kernel: EDAC MC: Ver: 3.0.0
Mar 17 18:42:26.527627 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:42:26.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.530618 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:42:26.559347 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:42:26.598209 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:42:26.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.598943 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:42:26.602382 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:42:26.610437 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:42:26.646431 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:42:26.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.647554 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:42:26.651075 systemd[1]: Mounting media-configdrive.mount...
Mar 17 18:42:26.653797 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:42:26.653881 systemd[1]: Reached target machines.target.
Mar 17 18:42:26.656030 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:42:26.678641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:42:26.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.681333 kernel: ISO 9660 Extensions: RRIP_1991A
Mar 17 18:42:26.683003 systemd[1]: Mounted media-configdrive.mount.
Mar 17 18:42:26.683619 systemd[1]: Reached target local-fs.target.
Mar 17 18:42:26.685919 systemd[1]: Starting ldconfig.service...
Mar 17 18:42:26.688582 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:42:26.688692 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:26.691275 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:42:26.695789 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:42:26.703115 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:42:26.714539 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl)
Mar 17 18:42:26.716822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:42:26.728538 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:42:26.742496 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:42:26.742808 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:42:26.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.768659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:42:26.770613 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:42:26.773074 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 18:42:26.813395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:42:26.850892 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 18:42:26.854440 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:42:26.854440 systemd-fsck[1054]: /dev/vda1: 789 files, 119299/258078 clusters
Mar 17 18:42:26.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.858386 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:42:26.860746 systemd[1]: Mounting boot.mount...
Mar 17 18:42:26.889613 (sd-sysext)[1057]: Using extensions 'kubernetes'.
Mar 17 18:42:26.890299 systemd[1]: Mounted boot.mount.
Mar 17 18:42:26.894169 (sd-sysext)[1057]: Merged extensions into '/usr'.
Mar 17 18:42:26.918202 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:42:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.937405 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:42:26.940567 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:42:26.941370 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:42:26.944527 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:42:26.949209 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:42:26.953228 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:42:26.954141 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:42:26.954355 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:26.954529 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:42:26.956450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:42:26.956818 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:42:26.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.959638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:42:26.960134 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:42:26.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.967407 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:42:26.970814 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:42:26.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.972759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:42:26.973260 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:42:26.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:26.977516 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:42:26.978077 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:42:26.978240 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:42:26.980338 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:42:26.998164 systemd[1]: Reloading.
Mar 17 18:42:27.013392 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:42:27.016647 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:42:27.020428 systemd-tmpfiles[1066]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:42:27.238256 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2025-03-17T18:42:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:42:27.238310 /usr/lib/systemd/system-generators/torcx-generator[1085]: time="2025-03-17T18:42:27Z" level=info msg="torcx already run"
Mar 17 18:42:27.277468 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:42:27.354132 systemd-networkd[1004]: eth0: Gained IPv6LL
Mar 17 18:42:27.421529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:27.421563 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:27.451282 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:27.539000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:42:27.539000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:42:27.539000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:42:27.539000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:42:27.541000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:42:27.544000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:42:27.544000 audit: BPF prog-id=27 op=UNLOAD
Mar 17 18:42:27.544000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:42:27.545000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:42:27.545000 audit: BPF prog-id=28 op=UNLOAD
Mar 17 18:42:27.545000 audit: BPF prog-id=29 op=UNLOAD
Mar 17 18:42:27.546098 systemd-networkd[1004]: eth1: Gained IPv6LL
Mar 17 18:42:27.556000 audit: BPF prog-id=38 op=LOAD
Mar 17 18:42:27.556000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:42:27.562871 systemd[1]: Finished ldconfig.service.
Mar 17 18:42:27.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.566127 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:42:27.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.573253 systemd[1]: Starting audit-rules.service...
Mar 17 18:42:27.576522 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:42:27.583304 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:42:27.585000 audit: BPF prog-id=39 op=LOAD
Mar 17 18:42:27.591000 audit: BPF prog-id=40 op=LOAD
Mar 17 18:42:27.590688 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:42:27.594320 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:42:27.599165 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:42:27.610886 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:42:27.611829 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:42:27.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.619539 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.619000 audit[1138]: SYSTEM_BOOT pid=1138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.622065 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:42:27.626212 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:42:27.630986 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:42:27.631998 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.632248 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:27.632483 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:42:27.634160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:42:27.634438 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:42:27.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.644950 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:42:27.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.648435 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.652416 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:42:27.653445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.653705 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:27.654990 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:42:27.656395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:42:27.656648 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:42:27.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.658801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:42:27.659071 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:42:27.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.666943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:42:27.667241 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:42:27.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.668896 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.671973 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:42:27.676130 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:42:27.682393 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:42:27.683156 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.683473 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:27.686725 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:42:27.689499 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:42:27.694559 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:42:27.694816 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:42:27.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.698036 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:42:27.698131 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:42:27.701141 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:42:27.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.704206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:42:27.704436 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:42:27.705207 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:42:27.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.709449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:42:27.709611 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:42:27.710254 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.711083 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:42:27.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.717114 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:42:27.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.719808 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:42:27.743186 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:42:27.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:27.762000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:42:27.762000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5f7c50b0 a2=420 a3=0 items=0 ppid=1133 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:27.762000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:42:27.763495 augenrules[1162]: No rules
Mar 17 18:42:27.764954 systemd[1]: Finished audit-rules.service.
Mar 17 18:42:27.795320 systemd-resolved[1136]: Positive Trust Anchors:
Mar 17 18:42:27.795928 systemd-resolved[1136]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:42:27.796140 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:42:27.796398 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:42:27.797594 systemd[1]: Reached target time-set.target.
Mar 17 18:42:27.809944 systemd-resolved[1136]: Using system hostname 'ci-3510.3.7-8-addee6c60b'.
Mar 17 18:42:27.813927 systemd[1]: Started systemd-resolved.service.
Mar 17 18:42:27.814574 systemd[1]: Reached target network.target.
Mar 17 18:42:27.815073 systemd[1]: Reached target network-online.target.
Mar 17 18:42:27.815476 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:42:27.823696 systemd[1]: Reached target sysinit.target.
Mar 17 18:42:27.824305 systemd[1]: Started motdgen.path.
Mar 17 18:42:27.824760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:42:27.825573 systemd[1]: Started logrotate.timer.
Mar 17 18:42:27.826162 systemd[1]: Started mdadm.timer.
Mar 17 18:42:27.826592 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:42:27.826960 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:42:27.827008 systemd[1]: Reached target paths.target.
Mar 17 18:42:27.827383 systemd[1]: Reached target timers.target.
Mar 17 18:42:27.828749 systemd[1]: Listening on dbus.socket.
Mar 17 18:42:27.831364 systemd[1]: Starting docker.socket...
Mar 17 18:42:27.837682 systemd[1]: Listening on sshd.socket.
Mar 17 18:42:27.838804 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:27.840032 systemd[1]: Listening on docker.socket.
Mar 17 18:42:27.841202 systemd[1]: Reached target sockets.target.
Mar 17 18:42:27.841934 systemd[1]: Reached target basic.target.
Mar 17 18:42:27.842584 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.842778 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:42:27.845009 systemd[1]: Starting containerd.service...
Mar 17 18:42:27.847700 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Mar 17 18:42:27.850343 systemd[1]: Starting dbus.service...
Mar 17 18:42:27.854411 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:42:27.861183 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:42:27.884565 jq[1175]: false
Mar 17 18:42:27.861764 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:42:27.864230 systemd[1]: Starting kubelet.service...
Mar 17 18:42:27.867202 systemd[1]: Starting motdgen.service...
Mar 17 18:42:27.871576 systemd[1]: Starting prepare-helm.service...
Mar 17 18:42:27.874405 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:42:27.878794 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:42:27.886201 systemd[1]: Starting systemd-logind.service...
Mar 17 18:42:27.886970 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:42:27.887104 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:42:27.888134 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:42:27.892397 systemd[1]: Starting update-engine.service...
Mar 17 18:42:27.897102 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:42:27.907645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:42:27.908169 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:42:27.913641 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:42:27.916154 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:42:27.948411 jq[1186]: true
Mar 17 18:42:27.949557 tar[1189]: linux-amd64/helm
Mar 17 18:42:27.974585 dbus-daemon[1172]: [system] SELinux support is enabled
Mar 17 18:42:27.988803 systemd[1]: Started dbus.service.
Mar 17 18:42:27.993497 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:42:27.993544 systemd[1]: Reached target system-config.target.
Mar 17 18:42:27.994177 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:42:27.994225 systemd[1]: Reached target user-config.target.
Mar 17 18:42:28.003537 jq[1199]: true
Mar 17 18:42:28.030451 extend-filesystems[1176]: Found loop1
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda1
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda2
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda3
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found usr
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda4
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda6
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda7
Mar 17 18:42:28.031493 extend-filesystems[1176]: Found vda9
Mar 17 18:42:28.031493 extend-filesystems[1176]: Checking size of /dev/vda9
Mar 17 18:42:28.044983 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:42:28.045252 systemd[1]: Finished motdgen.service.
Mar 17 18:42:28.097279 extend-filesystems[1176]: Resized partition /dev/vda9
Mar 17 18:42:28.124028 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:42:28.141911 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Mar 17 18:42:28.150515 update_engine[1185]: I0317 18:42:28.149833 1185 main.cc:92] Flatcar Update Engine starting
Mar 17 18:42:28.152539 bash[1225]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:42:28.155255 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:42:28.157394 update_engine[1185]: I0317 18:42:28.157344 1185 update_check_scheduler.cc:74] Next update check in 2m32s
Mar 17 18:42:28.157435 systemd[1]: Started update-engine.service.
Mar 17 18:42:28.160277 systemd[1]: Started locksmithd.service.
Mar 17 18:42:28.237796 env[1192]: time="2025-03-17T18:42:28.237684747Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:42:28.279918 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 18:42:28.310966 extend-filesystems[1226]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:42:28.310966 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 18:42:28.310966 extend-filesystems[1226]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 18:42:28.313928 extend-filesystems[1176]: Resized filesystem in /dev/vda9 Mar 17 18:42:28.313928 extend-filesystems[1176]: Found vdb Mar 17 18:42:28.312270 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:42:28.312578 systemd[1]: Finished extend-filesystems.service. Mar 17 18:42:28.348266 systemd-logind[1183]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:42:28.348951 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:42:28.353351 systemd-logind[1183]: New seat seat0. Mar 17 18:42:28.358290 systemd[1]: Started systemd-logind.service. Mar 17 18:42:28.395122 env[1192]: time="2025-03-17T18:42:28.395042836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:42:28.395314 env[1192]: time="2025-03-17T18:42:28.395293804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.397566 coreos-metadata[1171]: Mar 17 18:42:28.397 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:42:28.402964 env[1192]: time="2025-03-17T18:42:28.402901604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:28.402964 env[1192]: time="2025-03-17T18:42:28.402945876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.403335 env[1192]: time="2025-03-17T18:42:28.403297402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:28.403335 env[1192]: time="2025-03-17T18:42:28.403328124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.403482 env[1192]: time="2025-03-17T18:42:28.403343458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:42:28.403482 env[1192]: time="2025-03-17T18:42:28.403354032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.403482 env[1192]: time="2025-03-17T18:42:28.403467364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.403827 env[1192]: time="2025-03-17T18:42:28.403766542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:28.404075 env[1192]: time="2025-03-17T18:42:28.404044279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:28.404075 env[1192]: time="2025-03-17T18:42:28.404070893Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:42:28.404199 env[1192]: time="2025-03-17T18:42:28.404144913Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:42:28.404199 env[1192]: time="2025-03-17T18:42:28.404159708Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419329187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419395223Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419410509Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419534534Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419552125Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419634093Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419664025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419679552Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419693293Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419708275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419731147Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419746308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.419972311Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:42:28.420920 env[1192]: time="2025-03-17T18:42:28.420105677Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420521409Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420555009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420573302Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420634618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420648218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420719647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420751427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420765177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420791542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420808574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420839518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.420885642Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.421159740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.421214553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.421927 env[1192]: time="2025-03-17T18:42:28.421237484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.422516 env[1192]: time="2025-03-17T18:42:28.421250092Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:42:28.422516 env[1192]: time="2025-03-17T18:42:28.421280500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:42:28.422516 env[1192]: time="2025-03-17T18:42:28.421294029Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:42:28.422516 env[1192]: time="2025-03-17T18:42:28.421319683Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:42:28.422516 env[1192]: time="2025-03-17T18:42:28.421384262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:42:28.422794 env[1192]: time="2025-03-17T18:42:28.421643583Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:42:28.422794 env[1192]: time="2025-03-17T18:42:28.421778191Z" level=info msg="Connect containerd service"
Mar 17 18:42:28.422794 env[1192]: time="2025-03-17T18:42:28.421840442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.423098451Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.423665562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.423727863Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.424836546Z" level=info msg="containerd successfully booted in 0.192667s"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425082569Z" level=info msg="Start subscribing containerd event"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425167529Z" level=info msg="Start recovering state"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425300963Z" level=info msg="Start event monitor"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425355034Z" level=info msg="Start snapshots syncer"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425372609Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:42:28.429058 env[1192]: time="2025-03-17T18:42:28.425384105Z" level=info msg="Start streaming server"
Mar 17 18:42:28.429515 coreos-metadata[1171]: Mar 17 18:42:28.426 INFO Fetch successful
Mar 17 18:42:28.423946 systemd[1]: Started containerd.service.
Mar 17 18:42:28.436568 unknown[1171]: wrote ssh authorized keys file for user: core
Mar 17 18:42:28.463614 update-ssh-keys[1234]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:42:28.465107 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Mar 17 18:42:29.210830 locksmithd[1227]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:42:29.373719 tar[1189]: linux-amd64/LICENSE
Mar 17 18:42:29.375605 tar[1189]: linux-amd64/README.md
Mar 17 18:42:29.387916 systemd[1]: Finished prepare-helm.service.
Mar 17 18:42:29.672398 systemd[1]: Started kubelet.service.
Mar 17 18:42:29.752195 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:42:29.782034 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:42:29.784704 systemd[1]: Starting issuegen.service...
Mar 17 18:42:29.797409 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:42:29.797702 systemd[1]: Finished issuegen.service.
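The level=error entry above reports that the CRI plugin found no CNI network config; on a fresh node this is expected until a network add-on is installed, and the "cni network conf syncer" picks configs up once they appear. A minimal shell sketch of the same check, using the NetworkPluginConfDir and NetworkPluginBinDir paths from the CRI config dump above (the echoed messages are illustrative, not containerd's own):

```shell
# Check the CNI state containerd complained about; paths come from the
# "Start cri plugin" config above (/etc/cni/net.d, /opt/cni/bin).
cni_conf_dir=/etc/cni/net.d
if [ -d "$cni_conf_dir" ] && [ -n "$(ls -A "$cni_conf_dir" 2>/dev/null)" ]; then
  echo "CNI network config present in $cni_conf_dir"
else
  echo "no CNI network config in $cni_conf_dir (expected until a CNI add-on is installed)"
fi
ls /opt/cni/bin 2>/dev/null || echo "no CNI plugin binaries in /opt/cni/bin"
```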
Mar 17 18:42:29.801145 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:42:29.813479 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:42:29.817583 systemd[1]: Started getty@tty1.service.
Mar 17 18:42:29.821620 systemd[1]: Started serial-getty@ttyS0.service.
Mar 17 18:42:29.822573 systemd[1]: Reached target getty.target.
Mar 17 18:42:29.823215 systemd[1]: Reached target multi-user.target.
Mar 17 18:42:29.826132 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:42:29.844685 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:42:29.845167 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:42:29.845834 systemd[1]: Startup finished in 1.054s (kernel) + 5.397s (initrd) + 8.604s (userspace) = 15.056s.
Mar 17 18:42:30.523300 kubelet[1242]: E0317 18:42:30.523210 1242 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:42:30.527368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:42:30.527598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:42:30.528017 systemd[1]: kubelet.service: Consumed 1.429s CPU time.
Mar 17 18:42:30.794725 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:42:30.796770 systemd[1]: Started sshd@0-146.190.61.194:22-139.178.68.195:38212.service.
Mar 17 18:42:30.879341 sshd[1264]: Accepted publickey for core from 139.178.68.195 port 38212 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:42:30.883513 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:30.898061 systemd[1]: Created slice user-500.slice.
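The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is written during `kubeadm init` or `kubeadm join`, so this failure is expected until the node is bootstrapped. A hedged shell sketch of the same file check (the echoed messages are illustrative):

```shell
# Reproduce the file check behind kubelet's "no such file or directory" exit.
cfg=/var/lib/kubelet/config.yaml
if [ -f "$cfg" ]; then
  echo "kubelet config present: $cfg"
else
  echo "kubelet config missing: $cfg (node not yet bootstrapped?)"
fi
```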
Mar 17 18:42:30.899654 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:42:30.903940 systemd-logind[1183]: New session 1 of user core.
Mar 17 18:42:30.915404 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:42:30.917836 systemd[1]: Starting user@500.service...
Mar 17 18:42:30.924374 (systemd)[1267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:31.026002 systemd[1267]: Queued start job for default target default.target.
Mar 17 18:42:31.027285 systemd[1267]: Reached target paths.target.
Mar 17 18:42:31.027512 systemd[1267]: Reached target sockets.target.
Mar 17 18:42:31.027772 systemd[1267]: Reached target timers.target.
Mar 17 18:42:31.027942 systemd[1267]: Reached target basic.target.
Mar 17 18:42:31.028173 systemd[1]: Started user@500.service.
Mar 17 18:42:31.029534 systemd[1267]: Reached target default.target.
Mar 17 18:42:31.029609 systemd[1]: Started session-1.scope.
Mar 17 18:42:31.029619 systemd[1267]: Startup finished in 93ms.
Mar 17 18:42:31.097186 systemd[1]: Started sshd@1-146.190.61.194:22-139.178.68.195:38228.service.
Mar 17 18:42:31.148272 sshd[1276]: Accepted publickey for core from 139.178.68.195 port 38228 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:42:31.151111 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:31.159554 systemd[1]: Started session-2.scope.
Mar 17 18:42:31.160692 systemd-logind[1183]: New session 2 of user core.
Mar 17 18:42:31.235773 sshd[1276]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:31.243559 systemd[1]: Started sshd@2-146.190.61.194:22-139.178.68.195:38234.service.
Mar 17 18:42:31.244919 systemd[1]: sshd@1-146.190.61.194:22-139.178.68.195:38228.service: Deactivated successfully.
Mar 17 18:42:31.246233 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:42:31.247170 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:42:31.248334 systemd-logind[1183]: Removed session 2.
Mar 17 18:42:31.294053 sshd[1281]: Accepted publickey for core from 139.178.68.195 port 38234 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:42:31.297436 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:31.303706 systemd-logind[1183]: New session 3 of user core.
Mar 17 18:42:31.304460 systemd[1]: Started session-3.scope.
Mar 17 18:42:31.371207 sshd[1281]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:31.378071 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:42:31.378410 systemd[1]: sshd@2-146.190.61.194:22-139.178.68.195:38234.service: Deactivated successfully.
Mar 17 18:42:31.379322 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:42:31.382298 systemd[1]: Started sshd@3-146.190.61.194:22-139.178.68.195:38250.service.
Mar 17 18:42:31.384356 systemd-logind[1183]: Removed session 3.
Mar 17 18:42:31.432199 sshd[1288]: Accepted publickey for core from 139.178.68.195 port 38250 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:42:31.434927 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:31.441635 systemd[1]: Started session-4.scope.
Mar 17 18:42:31.442317 systemd-logind[1183]: New session 4 of user core.
Mar 17 18:42:31.512501 sshd[1288]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:31.518002 systemd-logind[1183]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:42:31.518386 systemd[1]: sshd@3-146.190.61.194:22-139.178.68.195:38250.service: Deactivated successfully.
Mar 17 18:42:31.519455 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:42:31.522474 systemd[1]: Started sshd@4-146.190.61.194:22-139.178.68.195:38254.service.
Mar 17 18:42:31.524198 systemd-logind[1183]: Removed session 4.
Mar 17 18:42:31.568335 sshd[1294]: Accepted publickey for core from 139.178.68.195 port 38254 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:42:31.570960 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:31.578012 systemd-logind[1183]: New session 5 of user core.
Mar 17 18:42:31.578995 systemd[1]: Started session-5.scope.
Mar 17 18:42:31.655134 sudo[1297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:42:31.656312 sudo[1297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:42:31.699151 systemd[1]: Starting docker.service...
Mar 17 18:42:31.759252 env[1307]: time="2025-03-17T18:42:31.759180840Z" level=info msg="Starting up"
Mar 17 18:42:31.762139 env[1307]: time="2025-03-17T18:42:31.762077354Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:42:31.762139 env[1307]: time="2025-03-17T18:42:31.762118713Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:42:31.762438 env[1307]: time="2025-03-17T18:42:31.762150252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:42:31.762438 env[1307]: time="2025-03-17T18:42:31.762168183Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:42:31.765216 env[1307]: time="2025-03-17T18:42:31.765177125Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:42:31.765408 env[1307]: time="2025-03-17T18:42:31.765392790Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:42:31.765572 env[1307]: time="2025-03-17T18:42:31.765553241Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:42:31.765656 env[1307]: time="2025-03-17T18:42:31.765641709Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:42:31.859123 env[1307]: time="2025-03-17T18:42:31.859053916Z" level=info msg="Loading containers: start."
Mar 17 18:42:32.025889 kernel: Initializing XFRM netlink socket
Mar 17 18:42:32.068916 env[1307]: time="2025-03-17T18:42:32.068833526Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:42:32.071545 systemd-timesyncd[1137]: Network configuration changed, trying to establish connection.
Mar 17 18:42:32.088327 systemd-timesyncd[1137]: Network configuration changed, trying to establish connection.
Mar 17 18:42:32.158282 systemd-networkd[1004]: docker0: Link UP
Mar 17 18:42:32.158785 systemd-timesyncd[1137]: Network configuration changed, trying to establish connection.
Mar 17 18:42:32.177829 env[1307]: time="2025-03-17T18:42:32.177783978Z" level=info msg="Loading containers: done."
Mar 17 18:42:32.194720 env[1307]: time="2025-03-17T18:42:32.193478697Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:42:32.194720 env[1307]: time="2025-03-17T18:42:32.193733943Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:42:32.194720 env[1307]: time="2025-03-17T18:42:32.193921385Z" level=info msg="Daemon has completed initialization"
Mar 17 18:42:32.210917 systemd[1]: Started docker.service.
Mar 17 18:42:32.222631 env[1307]: time="2025-03-17T18:42:32.222533988Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:42:32.253331 systemd[1]: Starting coreos-metadata.service...
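The docker startup above notes that the default bridge (docker0) takes 172.17.0.0/16 and points at the `--bip` daemon option. One common way to set it persistently is /etc/docker/daemon.json; a sketch with an illustrative subnet (the 10.200.0.1/24 value is an assumption, pick one that does not collide with your networks):

```json
{
  "bip": "10.200.0.1/24"
}
```

dockerd must be restarted for the change to take effect.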
Mar 17 18:42:32.304231 coreos-metadata[1427]: Mar 17 18:42:32.304 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 18:42:32.317773 coreos-metadata[1427]: Mar 17 18:42:32.317 INFO Fetch successful
Mar 17 18:42:32.334018 systemd[1]: Finished coreos-metadata.service.
Mar 17 18:42:33.371065 env[1192]: time="2025-03-17T18:42:33.370993479Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:42:33.941895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883625500.mount: Deactivated successfully.
Mar 17 18:42:35.775651 env[1192]: time="2025-03-17T18:42:35.775582102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:35.777653 env[1192]: time="2025-03-17T18:42:35.777606594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:35.779941 env[1192]: time="2025-03-17T18:42:35.779843952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:35.782230 env[1192]: time="2025-03-17T18:42:35.782177940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:35.783565 env[1192]: time="2025-03-17T18:42:35.783515472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:42:35.797987 env[1192]: time="2025-03-17T18:42:35.797933041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:42:37.888863 env[1192]: time="2025-03-17T18:42:37.888619271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:37.890551 env[1192]: time="2025-03-17T18:42:37.890478358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:37.894593 env[1192]: time="2025-03-17T18:42:37.894512001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:37.896777 env[1192]: time="2025-03-17T18:42:37.896602257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:42:37.897037 env[1192]: time="2025-03-17T18:42:37.895693930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:37.912339 env[1192]: time="2025-03-17T18:42:37.912275970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:42:39.435395 env[1192]: time="2025-03-17T18:42:39.435327490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:39.437899 env[1192]: time="2025-03-17T18:42:39.437829537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:39.439597 env[1192]: time="2025-03-17T18:42:39.439541989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:39.442035 env[1192]: time="2025-03-17T18:42:39.441984359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:39.442971 env[1192]: time="2025-03-17T18:42:39.442930615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:42:39.462619 env[1192]: time="2025-03-17T18:42:39.462555549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:42:40.726690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:42:40.726951 systemd[1]: Stopped kubelet.service.
Mar 17 18:42:40.727006 systemd[1]: kubelet.service: Consumed 1.429s CPU time.
Mar 17 18:42:40.729396 systemd[1]: Starting kubelet.service...
Mar 17 18:42:40.761116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678717414.mount: Deactivated successfully.
Mar 17 18:42:40.867875 systemd[1]: Started kubelet.service.
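The "Scheduled restart job, restart counter is at 1" entry above is systemd's Restart= logic re-launching the failed kubelet unit. A hedged shell sketch for inspecting that behavior on a live node (assumes systemd is present; Restart, NRestarts, and Result are standard unit properties):

```shell
# Inspect the restart behavior seen in the log above (assumes systemd).
if command -v systemctl >/dev/null 2>&1; then
  systemctl show kubelet.service -p Restart -p NRestarts -p Result 2>/dev/null \
    || echo "kubelet.service not known to this systemd"
else
  echo "systemctl not available"
fi
```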
Mar 17 18:42:40.969414 kubelet[1473]: E0317 18:42:40.969349 1473 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:42:40.973276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:42:40.973449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:42:41.535504 env[1192]: time="2025-03-17T18:42:41.535426841Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:41.538018 env[1192]: time="2025-03-17T18:42:41.537923950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:41.539555 env[1192]: time="2025-03-17T18:42:41.539476384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:41.542004 env[1192]: time="2025-03-17T18:42:41.541938252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:41.543056 env[1192]: time="2025-03-17T18:42:41.542999945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 18:42:41.562529 env[1192]: time="2025-03-17T18:42:41.562456948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:42:42.082336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227103395.mount: Deactivated successfully.
Mar 17 18:42:43.188427 env[1192]: time="2025-03-17T18:42:43.188347236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.190508 env[1192]: time="2025-03-17T18:42:43.190446508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.193142 env[1192]: time="2025-03-17T18:42:43.193082890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.196330 env[1192]: time="2025-03-17T18:42:43.196261326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.197543 env[1192]: time="2025-03-17T18:42:43.197478272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:42:43.219321 env[1192]: time="2025-03-17T18:42:43.219260762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:42:43.629203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697240644.mount: Deactivated successfully.
Mar 17 18:42:43.634512 env[1192]: time="2025-03-17T18:42:43.634450469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.635905 env[1192]: time="2025-03-17T18:42:43.635845277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.637791 env[1192]: time="2025-03-17T18:42:43.637739577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.639463 env[1192]: time="2025-03-17T18:42:43.639412586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:43.640980 env[1192]: time="2025-03-17T18:42:43.640931573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:42:43.667159 env[1192]: time="2025-03-17T18:42:43.667110846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:42:44.168045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288778569.mount: Deactivated successfully.
Mar 17 18:42:46.749596 env[1192]: time="2025-03-17T18:42:46.749486444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:46.752543 env[1192]: time="2025-03-17T18:42:46.752469153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:46.755514 env[1192]: time="2025-03-17T18:42:46.755462387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:46.758227 env[1192]: time="2025-03-17T18:42:46.758178570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:46.759434 env[1192]: time="2025-03-17T18:42:46.759388626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:42:50.107340 systemd[1]: Stopped kubelet.service.
Mar 17 18:42:50.111508 systemd[1]: Starting kubelet.service...
Mar 17 18:42:50.149280 systemd[1]: Reloading.
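The PullImage/ImageCreate entries above come from containerd's CRI plugin pulling the control-plane images. Assuming crictl is installed (it does not appear anywhere in this log), the pulled images can be listed over the same socket containerd reported serving on:

```shell
# List CRI images over containerd's socket (path taken from the log above).
# crictl itself is an assumption; it is not part of this log.
if command -v crictl >/dev/null 2>&1; then
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
else
  echo "crictl not installed"
fi
```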
Mar 17 18:42:50.340478 /usr/lib/systemd/system-generators/torcx-generator[1582]: time="2025-03-17T18:42:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:42:50.342006 /usr/lib/systemd/system-generators/torcx-generator[1582]: time="2025-03-17T18:42:50Z" level=info msg="torcx already run"
Mar 17 18:42:50.513549 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:50.513808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:50.542488 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:50.691004 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 18:42:50.691346 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 18:42:50.691953 systemd[1]: Stopped kubelet.service.
Mar 17 18:42:50.695386 systemd[1]: Starting kubelet.service...
Mar 17 18:42:50.763410 systemd[1]: Started sshd@5-146.190.61.194:22-92.118.39.87:47820.service.
Mar 17 18:42:50.838520 systemd[1]: Started kubelet.service.
Mar 17 18:42:50.929708 kubelet[1637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:42:50.930336 kubelet[1637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:42:50.930437 kubelet[1637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:42:50.932390 kubelet[1637]: I0317 18:42:50.932265 1637 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:42:51.334454 kubelet[1637]: I0317 18:42:51.334374 1637 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:42:51.334800 kubelet[1637]: I0317 18:42:51.334777 1637 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:42:51.335289 kubelet[1637]: I0317 18:42:51.335258 1637 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:42:51.359189 kubelet[1637]: I0317 18:42:51.359126 1637 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:42:51.359616 kubelet[1637]: E0317 18:42:51.359570 1637 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.61.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.377677 kubelet[1637]: I0317 18:42:51.377618 1637 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:42:51.381185 kubelet[1637]: I0317 18:42:51.381073 1637 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:42:51.381448 kubelet[1637]: I0317 18:42:51.381174 1637 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-8-addee6c60b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:42:51.382260 kubelet[1637]: I0317 18:42:51.382157 1637 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:42:51.382260 kubelet[1637]: I0317 18:42:51.382208 1637 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:42:51.382518 kubelet[1637]: I0317 18:42:51.382439 1637 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:42:51.383715 kubelet[1637]: I0317 18:42:51.383666 1637 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:42:51.383715 kubelet[1637]: I0317 18:42:51.383704 1637 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:42:51.383957 kubelet[1637]: I0317 18:42:51.383745 1637 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:42:51.383957 kubelet[1637]: I0317 18:42:51.383771 1637 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:42:51.393356 kubelet[1637]: W0317 18:42:51.393262 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.61.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-8-addee6c60b&limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.393701 kubelet[1637]: E0317 18:42:51.393669 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.61.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-8-addee6c60b&limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.400816 kubelet[1637]: W0317 18:42:51.400091 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.61.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.400816 kubelet[1637]: E0317 18:42:51.400184 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.61.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.401257 kubelet[1637]: I0317 18:42:51.401116 1637 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:42:51.403036 kubelet[1637]: I0317 18:42:51.402978 1637 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:42:51.403226 kubelet[1637]: W0317 18:42:51.403091 1637 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:42:51.404242 kubelet[1637]: I0317 18:42:51.403920 1637 server.go:1264] "Started kubelet"
Mar 17 18:42:51.420080 kubelet[1637]: I0317 18:42:51.420025 1637 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:42:51.421906 kubelet[1637]: I0317 18:42:51.421868 1637 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:42:51.422230 kubelet[1637]: I0317 18:42:51.422150 1637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:42:51.422805 kubelet[1637]: I0317 18:42:51.422768 1637 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:42:51.425501 kubelet[1637]: E0317 18:42:51.425157 1637 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.61.194:6443/api/v1/namespaces/default/events\": dial tcp 146.190.61.194:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-8-addee6c60b.182dab4b7314aa3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-8-addee6c60b,UID:ci-3510.3.7-8-addee6c60b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-8-addee6c60b,},FirstTimestamp:2025-03-17 18:42:51.403881021 +0000 UTC m=+0.557751865,LastTimestamp:2025-03-17 18:42:51.403881021 +0000 UTC m=+0.557751865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-8-addee6c60b,}"
Mar 17 18:42:51.431173 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:42:51.431497 kubelet[1637]: I0317 18:42:51.431448 1637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:42:51.436503 kubelet[1637]: E0317 18:42:51.436466 1637 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.7-8-addee6c60b\" not found"
Mar 17 18:42:51.436856 kubelet[1637]: I0317 18:42:51.436831 1637 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:42:51.437115 kubelet[1637]: I0317 18:42:51.437096 1637 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:42:51.437276 kubelet[1637]: I0317 18:42:51.437258 1637 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:42:51.438048 kubelet[1637]: W0317 18:42:51.437987 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.61.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.438205 kubelet[1637]: E0317 18:42:51.438189 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.61.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused
Mar 17 18:42:51.438634 kubelet[1637]: E0317 18:42:51.438605 1637 kubelet.go:1467] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:42:51.440042 kubelet[1637]: E0317 18:42:51.439930 1637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.61.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-8-addee6c60b?timeout=10s\": dial tcp 146.190.61.194:6443: connect: connection refused" interval="200ms" Mar 17 18:42:51.441162 sshd[1633]: Invalid user ubuntu from 92.118.39.87 port 47820 Mar 17 18:42:51.443016 kubelet[1637]: I0317 18:42:51.442975 1637 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:42:51.445816 kubelet[1637]: I0317 18:42:51.445777 1637 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:42:51.446077 kubelet[1637]: I0317 18:42:51.446057 1637 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:42:51.470062 kubelet[1637]: I0317 18:42:51.470013 1637 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:42:51.470309 kubelet[1637]: I0317 18:42:51.470288 1637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:42:51.470481 kubelet[1637]: I0317 18:42:51.470465 1637 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:51.473166 kubelet[1637]: I0317 18:42:51.473125 1637 policy_none.go:49] "None policy: Start" Mar 17 18:42:51.474613 kubelet[1637]: I0317 18:42:51.474570 1637 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:42:51.474891 kubelet[1637]: I0317 18:42:51.474844 1637 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:42:51.485855 systemd[1]: Created slice kubepods.slice. Mar 17 18:42:51.495096 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:42:51.500145 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 18:42:51.510859 kubelet[1637]: I0317 18:42:51.510805 1637 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:42:51.513021 kubelet[1637]: I0317 18:42:51.511824 1637 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:42:51.513021 kubelet[1637]: I0317 18:42:51.512053 1637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:42:51.518388 kubelet[1637]: E0317 18:42:51.518343 1637 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-8-addee6c60b\" not found" Mar 17 18:42:51.529341 kubelet[1637]: I0317 18:42:51.529216 1637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:42:51.531604 kubelet[1637]: I0317 18:42:51.531537 1637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:42:51.531604 kubelet[1637]: I0317 18:42:51.531593 1637 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:42:51.531817 kubelet[1637]: I0317 18:42:51.531632 1637 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:42:51.531817 kubelet[1637]: E0317 18:42:51.531707 1637 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:42:51.540964 kubelet[1637]: I0317 18:42:51.540510 1637 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.542256 kubelet[1637]: E0317 18:42:51.542195 1637 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.61.194:6443/api/v1/nodes\": dial tcp 146.190.61.194:6443: connect: connection refused" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.542638 kubelet[1637]: W0317 18:42:51.542208 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://146.190.61.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:51.542921 kubelet[1637]: E0317 18:42:51.542892 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.61.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:51.613868 sshd[1633]: pam_faillock(sshd:auth): User unknown Mar 17 18:42:51.617215 sshd[1633]: pam_unix(sshd:auth): check pass; user unknown Mar 17 18:42:51.617290 sshd[1633]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.118.39.87 Mar 17 18:42:51.618558 sshd[1633]: pam_faillock(sshd:auth): User unknown Mar 17 18:42:51.632326 kubelet[1637]: I0317 18:42:51.632209 1637 topology_manager.go:215] "Topology Admit Handler" podUID="70211c3ef14bb94f1540b17116479bba" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.634237 kubelet[1637]: I0317 18:42:51.634182 1637 topology_manager.go:215] "Topology Admit Handler" podUID="f6cd0febee0a3753e4200e3c7c8fe9f1" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.636228 kubelet[1637]: I0317 18:42:51.636158 1637 topology_manager.go:215] "Topology Admit Handler" podUID="9dc9f43c8778da066267aee3ff54b476" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.642208 kubelet[1637]: I0317 18:42:51.642166 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.642495 kubelet[1637]: I0317 18:42:51.642463 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6cd0febee0a3753e4200e3c7c8fe9f1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-8-addee6c60b\" (UID: \"f6cd0febee0a3753e4200e3c7c8fe9f1\") " pod="kube-system/kube-scheduler-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.642642 kubelet[1637]: I0317 18:42:51.642616 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.642748 kubelet[1637]: I0317 18:42:51.642732 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.642963 kubelet[1637]: I0317 18:42:51.642943 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.643112 kubelet[1637]: I0317 18:42:51.643093 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.643246 kubelet[1637]: I0317 18:42:51.643221 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.643511 kubelet[1637]: I0317 18:42:51.643346 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.643830 kubelet[1637]: E0317 18:42:51.642941 1637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.61.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-8-addee6c60b?timeout=10s\": dial tcp 146.190.61.194:6443: connect: connection refused" interval="400ms" Mar 17 18:42:51.643979 kubelet[1637]: I0317 18:42:51.643803 1637 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.647556 systemd[1]: Created slice kubepods-burstable-pod70211c3ef14bb94f1540b17116479bba.slice. 
Mar 17 18:42:51.663127 systemd[1]: Created slice kubepods-burstable-pod9dc9f43c8778da066267aee3ff54b476.slice. Mar 17 18:42:51.671178 systemd[1]: Created slice kubepods-burstable-podf6cd0febee0a3753e4200e3c7c8fe9f1.slice. Mar 17 18:42:51.746133 kubelet[1637]: I0317 18:42:51.746036 1637 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.746741 kubelet[1637]: E0317 18:42:51.746678 1637 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.61.194:6443/api/v1/nodes\": dial tcp 146.190.61.194:6443: connect: connection refused" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:51.957040 kubelet[1637]: E0317 18:42:51.956790 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:51.959687 env[1192]: time="2025-03-17T18:42:51.959129081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-8-addee6c60b,Uid:70211c3ef14bb94f1540b17116479bba,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:51.968364 kubelet[1637]: E0317 18:42:51.968307 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:51.969470 env[1192]: time="2025-03-17T18:42:51.969413047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-8-addee6c60b,Uid:9dc9f43c8778da066267aee3ff54b476,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:51.980503 kubelet[1637]: E0317 18:42:51.979025 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:51.980757 env[1192]: time="2025-03-17T18:42:51.979807926Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-8-addee6c60b,Uid:f6cd0febee0a3753e4200e3c7c8fe9f1,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:52.045536 kubelet[1637]: E0317 18:42:52.045430 1637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.61.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-8-addee6c60b?timeout=10s\": dial tcp 146.190.61.194:6443: connect: connection refused" interval="800ms" Mar 17 18:42:52.149756 kubelet[1637]: I0317 18:42:52.149326 1637 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:52.149756 kubelet[1637]: E0317 18:42:52.149709 1637 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.61.194:6443/api/v1/nodes\": dial tcp 146.190.61.194:6443: connect: connection refused" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:52.458435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387703431.mount: Deactivated successfully. 
Mar 17 18:42:52.463981 env[1192]: time="2025-03-17T18:42:52.463919326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.466825 env[1192]: time="2025-03-17T18:42:52.466762610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.468978 env[1192]: time="2025-03-17T18:42:52.468702167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.475455 env[1192]: time="2025-03-17T18:42:52.475386469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.477280 env[1192]: time="2025-03-17T18:42:52.477211921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.478764 env[1192]: time="2025-03-17T18:42:52.478670687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.483415 kubelet[1637]: W0317 18:42:52.483274 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.61.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.483415 kubelet[1637]: E0317 18:42:52.483373 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.61.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.485490 env[1192]: time="2025-03-17T18:42:52.485390237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.489549 env[1192]: time="2025-03-17T18:42:52.489488944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.494025 env[1192]: time="2025-03-17T18:42:52.493944040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.495437 env[1192]: time="2025-03-17T18:42:52.495361502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.503553 env[1192]: time="2025-03-17T18:42:52.503491982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.505191 env[1192]: time="2025-03-17T18:42:52.505132248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:52.538914 env[1192]: time="2025-03-17T18:42:52.538771938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:52.539231 env[1192]: time="2025-03-17T18:42:52.539161380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:52.539346 env[1192]: time="2025-03-17T18:42:52.539209971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:52.539346 env[1192]: time="2025-03-17T18:42:52.539222216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:52.539470 env[1192]: time="2025-03-17T18:42:52.539390819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4fc0ec9ce059f99ae8e7d0aaed1bb0460aa671dcf98fc202665e093b27a41fc pid=1685 runtime=io.containerd.runc.v2 Mar 17 18:42:52.539632 env[1192]: time="2025-03-17T18:42:52.539587436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:52.539812 env[1192]: time="2025-03-17T18:42:52.539774907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:52.540247 env[1192]: time="2025-03-17T18:42:52.540181636Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4731386809cc11045e6c42b64a6125660a1d64ee34825a6fbfd0b52c510038 pid=1675 runtime=io.containerd.runc.v2 Mar 17 18:42:52.549430 env[1192]: time="2025-03-17T18:42:52.549235599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:52.549430 env[1192]: time="2025-03-17T18:42:52.549326258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:52.549825 env[1192]: time="2025-03-17T18:42:52.549343241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:52.550990 env[1192]: time="2025-03-17T18:42:52.550892382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90c6b86b6e662d833cbd4162b42bb19743cfb2e404da44d197cb58fa6fb58699 pid=1702 runtime=io.containerd.runc.v2 Mar 17 18:42:52.579943 systemd[1]: Started cri-containerd-fa4731386809cc11045e6c42b64a6125660a1d64ee34825a6fbfd0b52c510038.scope. Mar 17 18:42:52.610537 systemd[1]: Started cri-containerd-90c6b86b6e662d833cbd4162b42bb19743cfb2e404da44d197cb58fa6fb58699.scope. Mar 17 18:42:52.626109 systemd[1]: Started cri-containerd-f4fc0ec9ce059f99ae8e7d0aaed1bb0460aa671dcf98fc202665e093b27a41fc.scope. Mar 17 18:42:52.658345 kubelet[1637]: W0317 18:42:52.658188 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.61.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-8-addee6c60b&limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.658345 kubelet[1637]: E0317 18:42:52.658307 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.61.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-8-addee6c60b&limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.706894 env[1192]: time="2025-03-17T18:42:52.705071116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-8-addee6c60b,Uid:f6cd0febee0a3753e4200e3c7c8fe9f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa4731386809cc11045e6c42b64a6125660a1d64ee34825a6fbfd0b52c510038\"" Mar 17 18:42:52.707314 
kubelet[1637]: E0317 18:42:52.706504 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:52.713690 env[1192]: time="2025-03-17T18:42:52.712062464Z" level=info msg="CreateContainer within sandbox \"fa4731386809cc11045e6c42b64a6125660a1d64ee34825a6fbfd0b52c510038\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:42:52.741440 env[1192]: time="2025-03-17T18:42:52.741378282Z" level=info msg="CreateContainer within sandbox \"fa4731386809cc11045e6c42b64a6125660a1d64ee34825a6fbfd0b52c510038\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8fe9c0aa933e36e23f6531988f1897f8567007b9bc897ff54b0c3533a88c6c43\"" Mar 17 18:42:52.744473 env[1192]: time="2025-03-17T18:42:52.743554243Z" level=info msg="StartContainer for \"8fe9c0aa933e36e23f6531988f1897f8567007b9bc897ff54b0c3533a88c6c43\"" Mar 17 18:42:52.746579 env[1192]: time="2025-03-17T18:42:52.746520125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-8-addee6c60b,Uid:70211c3ef14bb94f1540b17116479bba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4fc0ec9ce059f99ae8e7d0aaed1bb0460aa671dcf98fc202665e093b27a41fc\"" Mar 17 18:42:52.747620 kubelet[1637]: E0317 18:42:52.747578 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:52.750482 kubelet[1637]: W0317 18:42:52.750370 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.61.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.750482 kubelet[1637]: E0317 18:42:52.750443 1637 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.61.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.751047 env[1192]: time="2025-03-17T18:42:52.751001113Z" level=info msg="CreateContainer within sandbox \"f4fc0ec9ce059f99ae8e7d0aaed1bb0460aa671dcf98fc202665e093b27a41fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:42:52.758007 env[1192]: time="2025-03-17T18:42:52.757898535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-8-addee6c60b,Uid:9dc9f43c8778da066267aee3ff54b476,Namespace:kube-system,Attempt:0,} returns sandbox id \"90c6b86b6e662d833cbd4162b42bb19743cfb2e404da44d197cb58fa6fb58699\"" Mar 17 18:42:52.759808 kubelet[1637]: E0317 18:42:52.759388 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:52.763746 env[1192]: time="2025-03-17T18:42:52.763671627Z" level=info msg="CreateContainer within sandbox \"90c6b86b6e662d833cbd4162b42bb19743cfb2e404da44d197cb58fa6fb58699\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:42:52.782337 env[1192]: time="2025-03-17T18:42:52.782246585Z" level=info msg="CreateContainer within sandbox \"f4fc0ec9ce059f99ae8e7d0aaed1bb0460aa671dcf98fc202665e093b27a41fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ad340468f0d121c9b3efb805e03650f17d479e47bcf234f12c6774e921a33d7\"" Mar 17 18:42:52.785026 env[1192]: time="2025-03-17T18:42:52.784963393Z" level=info msg="StartContainer for \"3ad340468f0d121c9b3efb805e03650f17d479e47bcf234f12c6774e921a33d7\"" Mar 17 18:42:52.793995 env[1192]: time="2025-03-17T18:42:52.793913737Z" level=info msg="CreateContainer within 
sandbox \"90c6b86b6e662d833cbd4162b42bb19743cfb2e404da44d197cb58fa6fb58699\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"add8d8893a3dbc80bd9f615c527223934216c8e98bda6fa7db48cc567cb57fcf\"" Mar 17 18:42:52.795143 env[1192]: time="2025-03-17T18:42:52.795076809Z" level=info msg="StartContainer for \"add8d8893a3dbc80bd9f615c527223934216c8e98bda6fa7db48cc567cb57fcf\"" Mar 17 18:42:52.808772 systemd[1]: Started cri-containerd-8fe9c0aa933e36e23f6531988f1897f8567007b9bc897ff54b0c3533a88c6c43.scope. Mar 17 18:42:52.837052 systemd[1]: Started cri-containerd-3ad340468f0d121c9b3efb805e03650f17d479e47bcf234f12c6774e921a33d7.scope. Mar 17 18:42:52.846729 kubelet[1637]: E0317 18:42:52.846641 1637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.61.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-8-addee6c60b?timeout=10s\": dial tcp 146.190.61.194:6443: connect: connection refused" interval="1.6s" Mar 17 18:42:52.864941 systemd[1]: Started cri-containerd-add8d8893a3dbc80bd9f615c527223934216c8e98bda6fa7db48cc567cb57fcf.scope. 
Mar 17 18:42:52.907513 kubelet[1637]: W0317 18:42:52.907418 1637 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.61.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.907513 kubelet[1637]: E0317 18:42:52.907516 1637 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.61.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:52.953577 kubelet[1637]: I0317 18:42:52.953526 1637 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:52.957369 kubelet[1637]: E0317 18:42:52.957274 1637 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.61.194:6443/api/v1/nodes\": dial tcp 146.190.61.194:6443: connect: connection refused" node="ci-3510.3.7-8-addee6c60b" Mar 17 18:42:52.976377 env[1192]: time="2025-03-17T18:42:52.976315446Z" level=info msg="StartContainer for \"8fe9c0aa933e36e23f6531988f1897f8567007b9bc897ff54b0c3533a88c6c43\" returns successfully" Mar 17 18:42:52.994141 env[1192]: time="2025-03-17T18:42:52.994044309Z" level=info msg="StartContainer for \"3ad340468f0d121c9b3efb805e03650f17d479e47bcf234f12c6774e921a33d7\" returns successfully" Mar 17 18:42:53.007193 env[1192]: time="2025-03-17T18:42:53.007114925Z" level=info msg="StartContainer for \"add8d8893a3dbc80bd9f615c527223934216c8e98bda6fa7db48cc567cb57fcf\" returns successfully" Mar 17 18:42:53.435124 sshd[1633]: Failed password for invalid user ubuntu from 92.118.39.87 port 47820 ssh2 Mar 17 18:42:53.545457 kubelet[1637]: E0317 18:42:53.545370 1637 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: 
Post "https://146.190.61.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.61.194:6443: connect: connection refused Mar 17 18:42:53.550579 kubelet[1637]: E0317 18:42:53.550534 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:53.553827 kubelet[1637]: E0317 18:42:53.553791 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:53.561918 kubelet[1637]: E0317 18:42:53.561875 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:42:53.603497 sshd[1633]: Connection closed by invalid user ubuntu 92.118.39.87 port 47820 [preauth] Mar 17 18:42:53.605167 systemd[1]: sshd@5-146.190.61.194:22-92.118.39.87:47820.service: Deactivated successfully. 
Mar 17 18:42:54.569786 kubelet[1637]: E0317 18:42:54.569736 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:42:54.570749 kubelet[1637]: I0317 18:42:54.570718 1637 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:42:55.410451 kubelet[1637]: E0317 18:42:55.410376 1637 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-8-addee6c60b\" not found" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:42:55.571022 kubelet[1637]: I0317 18:42:55.570953 1637 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:42:55.589384 kubelet[1637]: E0317 18:42:55.589287 1637 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.7-8-addee6c60b\" not found"
Mar 17 18:42:55.723324 kubelet[1637]: E0317 18:42:55.690085 1637 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.7-8-addee6c60b\" not found"
Mar 17 18:42:56.242629 kubelet[1637]: E0317 18:42:56.242575 1637 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:42:56.243793 kubelet[1637]: E0317 18:42:56.243738 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:42:56.396900 kubelet[1637]: I0317 18:42:56.396822 1637 apiserver.go:52] "Watching apiserver"
Mar 17 18:42:56.437780 kubelet[1637]: I0317 18:42:56.437691 1637 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:42:58.321321 systemd[1]: Reloading.
Mar 17 18:42:58.471248 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2025-03-17T18:42:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:42:58.471279 /usr/lib/systemd/system-generators/torcx-generator[1926]: time="2025-03-17T18:42:58Z" level=info msg="torcx already run"
Mar 17 18:42:58.596315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:58.597211 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:58.625618 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:58.691253 kubelet[1637]: W0317 18:42:58.691211 1637 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:42:58.692263 kubelet[1637]: E0317 18:42:58.692226 1637 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:42:58.813994 kubelet[1637]: E0317 18:42:58.813365 1637 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510.3.7-8-addee6c60b.182dab4b7314aa3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-8-addee6c60b,UID:ci-3510.3.7-8-addee6c60b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-8-addee6c60b,},FirstTimestamp:2025-03-17 18:42:51.403881021 +0000 UTC m=+0.557751865,LastTimestamp:2025-03-17 18:42:51.403881021 +0000 UTC m=+0.557751865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-8-addee6c60b,}"
Mar 17 18:42:58.813783 systemd[1]: Stopping kubelet.service...
Mar 17 18:42:58.830493 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:42:58.830767 systemd[1]: Stopped kubelet.service.
Mar 17 18:42:58.830839 systemd[1]: kubelet.service: Consumed 1.052s CPU time.
Mar 17 18:42:58.836546 systemd[1]: Starting kubelet.service...
Mar 17 18:43:00.012123 systemd[1]: Started kubelet.service.
Mar 17 18:43:00.125669 kubelet[1974]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:43:00.125669 kubelet[1974]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:43:00.125669 kubelet[1974]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:43:00.126463 kubelet[1974]: I0317 18:43:00.125706 1974 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:43:00.139160 kubelet[1974]: I0317 18:43:00.139100 1974 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:43:00.139447 kubelet[1974]: I0317 18:43:00.139428 1974 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:43:00.139946 kubelet[1974]: I0317 18:43:00.139918 1974 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:43:00.144758 kubelet[1974]: I0317 18:43:00.144705 1974 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:43:00.152180 kubelet[1974]: I0317 18:43:00.151840 1974 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:43:00.168049 kubelet[1974]: I0317 18:43:00.167998 1974 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:43:00.168400 kubelet[1974]: I0317 18:43:00.168350 1974 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:43:00.170760 kubelet[1974]: I0317 18:43:00.168387 1974 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-8-addee6c60b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:43:00.170760 kubelet[1974]: I0317 18:43:00.170446 1974 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:43:00.170760 kubelet[1974]: I0317 18:43:00.170510 1974 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:43:00.170760 kubelet[1974]: I0317 18:43:00.170635 1974 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:43:00.173417 kubelet[1974]: I0317 18:43:00.173374 1974 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:43:00.173829 kubelet[1974]: I0317 18:43:00.173800 1974 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:43:00.174036 kubelet[1974]: I0317 18:43:00.174006 1974 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:43:00.174151 kubelet[1974]: I0317 18:43:00.174135 1974 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:43:00.176720 kubelet[1974]: I0317 18:43:00.176664 1974 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:43:00.177243 kubelet[1974]: I0317 18:43:00.177216 1974 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:43:00.177984 kubelet[1974]: I0317 18:43:00.177959 1974 server.go:1264] "Started kubelet"
Mar 17 18:43:00.181102 kubelet[1974]: I0317 18:43:00.181059 1974 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:43:00.193088 kubelet[1974]: I0317 18:43:00.193015 1974 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:43:00.209581 kubelet[1974]: I0317 18:43:00.209535 1974 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:43:00.214952 kubelet[1974]: I0317 18:43:00.197790 1974 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:43:00.232910 sudo[1995]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:43:00.233278 sudo[1995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:43:00.235988 kubelet[1974]: I0317 18:43:00.193272 1974 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:43:00.237201 kubelet[1974]: I0317 18:43:00.237159 1974 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:43:00.239707 kubelet[1974]: I0317 18:43:00.197813 1974 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:43:00.239707 kubelet[1974]: I0317 18:43:00.238432 1974 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:43:00.248419 kubelet[1974]: I0317 18:43:00.248376 1974 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:43:00.248683 kubelet[1974]: I0317 18:43:00.248661 1974 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:43:00.251576 kubelet[1974]: I0317 18:43:00.251518 1974 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:43:00.274028 kubelet[1974]: E0317 18:43:00.271498 1974 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:43:00.281469 kubelet[1974]: I0317 18:43:00.281080 1974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:43:00.284900 kubelet[1974]: I0317 18:43:00.284125 1974 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:43:00.284900 kubelet[1974]: I0317 18:43:00.284182 1974 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:43:00.284900 kubelet[1974]: I0317 18:43:00.284218 1974 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:43:00.284900 kubelet[1974]: E0317 18:43:00.284284 1974 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:43:00.305649 kubelet[1974]: I0317 18:43:00.303894 1974 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.342719 kubelet[1974]: I0317 18:43:00.342666 1974 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.342994 kubelet[1974]: I0317 18:43:00.342807 1974 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.384838 kubelet[1974]: E0317 18:43:00.384785 1974 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:43:00.402189 kubelet[1974]: I0317 18:43:00.402139 1974 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:43:00.402189 kubelet[1974]: I0317 18:43:00.402169 1974 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:43:00.402500 kubelet[1974]: I0317 18:43:00.402251 1974 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:43:00.402500 kubelet[1974]: I0317 18:43:00.402462 1974 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:43:00.402624 kubelet[1974]: I0317 18:43:00.402487 1974 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:43:00.402624 kubelet[1974]: I0317 18:43:00.402514 1974 policy_none.go:49] "None policy: Start"
Mar 17 18:43:00.403832 kubelet[1974]: I0317 18:43:00.403791 1974 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:43:00.403832 kubelet[1974]: I0317 18:43:00.403861 1974 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:43:00.404635 kubelet[1974]: I0317 18:43:00.404223 1974 state_mem.go:75] "Updated machine memory state"
Mar 17 18:43:00.413291 kubelet[1974]: I0317 18:43:00.413247 1974 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:43:00.413800 kubelet[1974]: I0317 18:43:00.413734 1974 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:43:00.414422 kubelet[1974]: I0317 18:43:00.414393 1974 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:43:00.585551 kubelet[1974]: I0317 18:43:00.585251 1974 topology_manager.go:215] "Topology Admit Handler" podUID="f6cd0febee0a3753e4200e3c7c8fe9f1" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.585551 kubelet[1974]: I0317 18:43:00.585430 1974 topology_manager.go:215] "Topology Admit Handler" podUID="9dc9f43c8778da066267aee3ff54b476" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.585833 kubelet[1974]: I0317 18:43:00.585606 1974 topology_manager.go:215] "Topology Admit Handler" podUID="70211c3ef14bb94f1540b17116479bba" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.607204 kubelet[1974]: W0317 18:43:00.607144 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:43:00.607536 kubelet[1974]: W0317 18:43:00.607507 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:43:00.613555 kubelet[1974]: W0317 18:43:00.613479 1974 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:43:00.613758 kubelet[1974]: E0317 18:43:00.613623 1974 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-8-addee6c60b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.648308 kubelet[1974]: I0317 18:43:00.648241 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.648837 kubelet[1974]: I0317 18:43:00.648786 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.649178 kubelet[1974]: I0317 18:43:00.649143 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.649370 kubelet[1974]: I0317 18:43:00.649341 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6cd0febee0a3753e4200e3c7c8fe9f1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-8-addee6c60b\" (UID: \"f6cd0febee0a3753e4200e3c7c8fe9f1\") " pod="kube-system/kube-scheduler-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.649517 kubelet[1974]: I0317 18:43:00.649492 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dc9f43c8778da066267aee3ff54b476-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-8-addee6c60b\" (UID: \"9dc9f43c8778da066267aee3ff54b476\") " pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.649701 kubelet[1974]: I0317 18:43:00.649676 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.649841 kubelet[1974]: I0317 18:43:00.649817 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.650046 kubelet[1974]: I0317 18:43:00.650021 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.650220 kubelet[1974]: I0317 18:43:00.650193 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70211c3ef14bb94f1540b17116479bba-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-8-addee6c60b\" (UID: \"70211c3ef14bb94f1540b17116479bba\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b"
Mar 17 18:43:00.910883 kubelet[1974]: E0317 18:43:00.909494 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:00.911520 kubelet[1974]: E0317 18:43:00.911475 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:00.915262 kubelet[1974]: E0317 18:43:00.915218 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:01.175557 kubelet[1974]: I0317 18:43:01.175377 1974 apiserver.go:52] "Watching apiserver"
Mar 17 18:43:01.240897 kubelet[1974]: I0317 18:43:01.240787 1974 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:43:01.254612 sudo[1995]: pam_unix(sudo:session): session closed for user root
Mar 17 18:43:01.357669 kubelet[1974]: E0317 18:43:01.357609 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:01.360058 kubelet[1974]: E0317 18:43:01.359976 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:01.360335 kubelet[1974]: E0317 18:43:01.360042 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:01.402928 kubelet[1974]: I0317 18:43:01.402806 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-8-addee6c60b" podStartSLOduration=1.402771952 podStartE2EDuration="1.402771952s" podCreationTimestamp="2025-03-17 18:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:01.402374055 +0000 UTC m=+1.371529976" watchObservedRunningTime="2025-03-17 18:43:01.402771952 +0000 UTC m=+1.371927875"
Mar 17 18:43:01.422166 kubelet[1974]: I0317 18:43:01.422073 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-8-addee6c60b" podStartSLOduration=3.422043803 podStartE2EDuration="3.422043803s" podCreationTimestamp="2025-03-17 18:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:01.42191376 +0000 UTC m=+1.391069709" watchObservedRunningTime="2025-03-17 18:43:01.422043803 +0000 UTC m=+1.391199726"
Mar 17 18:43:01.486076 kubelet[1974]: I0317 18:43:01.485983 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-8-addee6c60b" podStartSLOduration=1.4859615 podStartE2EDuration="1.4859615s" podCreationTimestamp="2025-03-17 18:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:01.455961464 +0000 UTC m=+1.425117385" watchObservedRunningTime="2025-03-17 18:43:01.4859615 +0000 UTC m=+1.455117413"
Mar 17 18:43:02.359727 kubelet[1974]: E0317 18:43:02.359677 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:03.821455 systemd-timesyncd[1137]: Contacted time server 192.48.105.15:123 (2.flatcar.pool.ntp.org).
Mar 17 18:43:03.821537 systemd-timesyncd[1137]: Initial clock synchronization to Mon 2025-03-17 18:43:03.820990 UTC.
Mar 17 18:43:03.821640 systemd-resolved[1136]: Clock change detected. Flushing caches.
Mar 17 18:43:04.453835 sudo[1297]: pam_unix(sudo:session): session closed for user root
Mar 17 18:43:04.458561 sshd[1294]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:04.463353 systemd[1]: sshd@4-146.190.61.194:22-139.178.68.195:38254.service: Deactivated successfully.
Mar 17 18:43:04.464529 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:43:04.464714 systemd[1]: session-5.scope: Consumed 5.903s CPU time.
Mar 17 18:43:04.465420 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:43:04.467383 systemd-logind[1183]: Removed session 5.
Mar 17 18:43:08.256594 kubelet[1974]: E0317 18:43:08.256520 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:08.711880 kubelet[1974]: E0317 18:43:08.711825 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:10.686372 kubelet[1974]: E0317 18:43:10.686316 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:10.716647 kubelet[1974]: E0317 18:43:10.716609 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:11.561240 kubelet[1974]: E0317 18:43:11.561196 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:11.719119 kubelet[1974]: E0317 18:43:11.719058 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:11.720260 kubelet[1974]: E0317 18:43:11.720211 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:12.978075 kubelet[1974]: I0317 18:43:12.978026 1974 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:43:12.978620 env[1192]: time="2025-03-17T18:43:12.978492038Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:43:12.978989 kubelet[1974]: I0317 18:43:12.978781 1974 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:43:13.737062 kubelet[1974]: I0317 18:43:13.736966 1974 topology_manager.go:215] "Topology Admit Handler" podUID="50022a0e-d4d3-4900-aa68-c5adaa7e2651" podNamespace="kube-system" podName="kube-proxy-clgmf"
Mar 17 18:43:13.746742 systemd[1]: Created slice kubepods-besteffort-pod50022a0e_d4d3_4900_aa68_c5adaa7e2651.slice.
Mar 17 18:43:13.751702 kubelet[1974]: W0317 18:43:13.751549 1974 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.751702 kubelet[1974]: E0317 18:43:13.751649 1974 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.752951 kubelet[1974]: W0317 18:43:13.752491 1974 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.752951 kubelet[1974]: E0317 18:43:13.752541 1974 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.753646 kubelet[1974]: I0317 18:43:13.753603 1974 topology_manager.go:215] "Topology Admit Handler" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" podNamespace="kube-system" podName="cilium-tzl8d"
Mar 17 18:43:13.762899 systemd[1]: Created slice kubepods-burstable-podbfe69589_6d6b_4f2a_aca3_a095a04dbfcb.slice.
Mar 17 18:43:13.771864 kubelet[1974]: W0317 18:43:13.771809 1974 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.771864 kubelet[1974]: E0317 18:43:13.771855 1974 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.772274 kubelet[1974]: W0317 18:43:13.771827 1974 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.772396 kubelet[1974]: E0317 18:43:13.772378 1974 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.774293 kubelet[1974]: W0317 18:43:13.774229 1974 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.774618 kubelet[1974]: E0317 18:43:13.774565 1974 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-8-addee6c60b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-8-addee6c60b' and this object
Mar 17 18:43:13.776405 kubelet[1974]: I0317 18:43:13.776351 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-bpf-maps\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.776686 kubelet[1974]: I0317 18:43:13.776656 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hostproc\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.776858 kubelet[1974]: I0317 18:43:13.776828 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cni-path\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.777083 kubelet[1974]: I0317 18:43:13.777055 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-net\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.777279 kubelet[1974]: I0317 18:43:13.777249 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-proxy\") pod \"kube-proxy-clgmf\" (UID: \"50022a0e-d4d3-4900-aa68-c5adaa7e2651\") " pod="kube-system/kube-proxy-clgmf"
Mar 17 18:43:13.777449 kubelet[1974]: I0317 18:43:13.777416 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgxb\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.777641 kubelet[1974]: I0317 18:43:13.777617 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndvm\" (UniqueName: \"kubernetes.io/projected/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-api-access-tndvm\") pod \"kube-proxy-clgmf\" (UID: \"50022a0e-d4d3-4900-aa68-c5adaa7e2651\") " pod="kube-system/kube-proxy-clgmf"
Mar 17 18:43:13.777855 kubelet[1974]: I0317 18:43:13.777821 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hubble-tls\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.778067 kubelet[1974]: I0317 18:43:13.778028 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50022a0e-d4d3-4900-aa68-c5adaa7e2651-lib-modules\") pod \"kube-proxy-clgmf\" (UID: \"50022a0e-d4d3-4900-aa68-c5adaa7e2651\") " pod="kube-system/kube-proxy-clgmf"
Mar 17 18:43:13.778246 kubelet[1974]: I0317 18:43:13.778212 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-cgroup\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.778434 kubelet[1974]: I0317 18:43:13.778383 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-etc-cni-netd\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.778648 kubelet[1974]: I0317 18:43:13.778623 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-lib-modules\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.778797 kubelet[1974]: I0317 18:43:13.778780 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d"
Mar 17 18:43:13.778926 kubelet[1974]: I0317 18:43:13.778909 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50022a0e-d4d3-4900-aa68-c5adaa7e2651-xtables-lock\") pod \"kube-proxy-clgmf\" (UID: \"50022a0e-d4d3-4900-aa68-c5adaa7e2651\") " pod="kube-system/kube-proxy-clgmf"
Mar 17 18:43:13.779067 kubelet[1974]: I0317 18:43:13.779045 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName:
\"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-run\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d" Mar 17 18:43:13.779254 kubelet[1974]: I0317 18:43:13.779219 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-xtables-lock\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d" Mar 17 18:43:13.779479 kubelet[1974]: I0317 18:43:13.779446 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d" Mar 17 18:43:13.779654 kubelet[1974]: I0317 18:43:13.779631 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-kernel\") pod \"cilium-tzl8d\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") " pod="kube-system/cilium-tzl8d" Mar 17 18:43:14.025061 kubelet[1974]: I0317 18:43:14.024831 1974 topology_manager.go:215] "Topology Admit Handler" podUID="6a8e89d9-173c-4b92-b380-8c24b2558912" podNamespace="kube-system" podName="cilium-operator-599987898-dw625" Mar 17 18:43:14.035490 systemd[1]: Created slice kubepods-besteffort-pod6a8e89d9_173c_4b92_b380_8c24b2558912.slice. 
Mar 17 18:43:14.088160 kubelet[1974]: I0317 18:43:14.088076 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path\") pod \"cilium-operator-599987898-dw625\" (UID: \"6a8e89d9-173c-4b92-b380-8c24b2558912\") " pod="kube-system/cilium-operator-599987898-dw625" Mar 17 18:43:14.088160 kubelet[1974]: I0317 18:43:14.088159 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzmm\" (UniqueName: \"kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm\") pod \"cilium-operator-599987898-dw625\" (UID: \"6a8e89d9-173c-4b92-b380-8c24b2558912\") " pod="kube-system/cilium-operator-599987898-dw625" Mar 17 18:43:14.880785 kubelet[1974]: E0317 18:43:14.880741 1974 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.881471 kubelet[1974]: E0317 18:43:14.881444 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-proxy podName:50022a0e-d4d3-4900-aa68-c5adaa7e2651 nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.381412218 +0000 UTC m=+14.009848432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-proxy") pod "kube-proxy-clgmf" (UID: "50022a0e-d4d3-4900-aa68-c5adaa7e2651") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.883149 kubelet[1974]: E0317 18:43:14.883103 1974 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.883499 kubelet[1974]: E0317 18:43:14.883462 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path podName:bfe69589-6d6b-4f2a-aca3-a095a04dbfcb nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.383433133 +0000 UTC m=+14.011869356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path") pod "cilium-tzl8d" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.887414 kubelet[1974]: E0317 18:43:14.887350 1974 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:43:14.888274 kubelet[1974]: E0317 18:43:14.888234 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets podName:bfe69589-6d6b-4f2a-aca3-a095a04dbfcb nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.388206136 +0000 UTC m=+14.016642350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets") pod "cilium-tzl8d" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:43:14.895171 kubelet[1974]: E0317 18:43:14.895116 1974 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.895512 kubelet[1974]: E0317 18:43:14.895480 1974 projected.go:200] Error preparing data for projected volume kube-api-access-tndvm for pod kube-system/kube-proxy-clgmf: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.896146 kubelet[1974]: E0317 18:43:14.896115 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-api-access-tndvm podName:50022a0e-d4d3-4900-aa68-c5adaa7e2651 nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.396083321 +0000 UTC m=+14.024519543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tndvm" (UniqueName: "kubernetes.io/projected/50022a0e-d4d3-4900-aa68-c5adaa7e2651-kube-api-access-tndvm") pod "kube-proxy-clgmf" (UID: "50022a0e-d4d3-4900-aa68-c5adaa7e2651") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.896473 kubelet[1974]: E0317 18:43:14.896400 1974 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.896667 kubelet[1974]: E0317 18:43:14.896620 1974 projected.go:200] Error preparing data for projected volume kube-api-access-prgxb for pod kube-system/cilium-tzl8d: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:14.896868 kubelet[1974]: E0317 18:43:14.896848 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb podName:bfe69589-6d6b-4f2a-aca3-a095a04dbfcb nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.396825757 +0000 UTC m=+14.025261978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-prgxb" (UniqueName: "kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb") pod "cilium-tzl8d" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.140043 update_engine[1185]: I0317 18:43:15.139830 1185 update_attempter.cc:509] Updating boot flags... 
Mar 17 18:43:15.189409 kubelet[1974]: E0317 18:43:15.188928 1974 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.189409 kubelet[1974]: E0317 18:43:15.189070 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path podName:6a8e89d9-173c-4b92-b380-8c24b2558912 nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.689035803 +0000 UTC m=+14.317472021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path") pod "cilium-operator-599987898-dw625" (UID: "6a8e89d9-173c-4b92-b380-8c24b2558912") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.199669 kubelet[1974]: E0317 18:43:15.199151 1974 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.199669 kubelet[1974]: E0317 18:43:15.199208 1974 projected.go:200] Error preparing data for projected volume kube-api-access-zzzmm for pod kube-system/cilium-operator-599987898-dw625: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.199669 kubelet[1974]: E0317 18:43:15.199297 1974 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm podName:6a8e89d9-173c-4b92-b380-8c24b2558912 nodeName:}" failed. No retries permitted until 2025-03-17 18:43:15.699269246 +0000 UTC m=+14.327705473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zzzmm" (UniqueName: "kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm") pod "cilium-operator-599987898-dw625" (UID: "6a8e89d9-173c-4b92-b380-8c24b2558912") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:43:15.555378 kubelet[1974]: E0317 18:43:15.555227 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:15.560662 env[1192]: time="2025-03-17T18:43:15.559984409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clgmf,Uid:50022a0e-d4d3-4900-aa68-c5adaa7e2651,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:15.568633 kubelet[1974]: E0317 18:43:15.568545 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:15.570322 env[1192]: time="2025-03-17T18:43:15.570233846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzl8d,Uid:bfe69589-6d6b-4f2a-aca3-a095a04dbfcb,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:15.605744 env[1192]: time="2025-03-17T18:43:15.605407459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:15.605744 env[1192]: time="2025-03-17T18:43:15.605491510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:15.605744 env[1192]: time="2025-03-17T18:43:15.605508523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:15.606131 env[1192]: time="2025-03-17T18:43:15.605792925Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98044a9b65ca41140a0e77c44b8d906077b76251132f010c5c328c857883906e pid=2070 runtime=io.containerd.runc.v2 Mar 17 18:43:15.619755 env[1192]: time="2025-03-17T18:43:15.619511645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:15.619755 env[1192]: time="2025-03-17T18:43:15.619657417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:15.620161 env[1192]: time="2025-03-17T18:43:15.620088688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:15.620631 env[1192]: time="2025-03-17T18:43:15.620556781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f pid=2087 runtime=io.containerd.runc.v2 Mar 17 18:43:15.632860 systemd[1]: Started cri-containerd-98044a9b65ca41140a0e77c44b8d906077b76251132f010c5c328c857883906e.scope. Mar 17 18:43:15.694786 systemd[1]: Started cri-containerd-66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f.scope. 
Mar 17 18:43:15.747683 env[1192]: time="2025-03-17T18:43:15.747624729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clgmf,Uid:50022a0e-d4d3-4900-aa68-c5adaa7e2651,Namespace:kube-system,Attempt:0,} returns sandbox id \"98044a9b65ca41140a0e77c44b8d906077b76251132f010c5c328c857883906e\"" Mar 17 18:43:15.751537 kubelet[1974]: E0317 18:43:15.750663 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:15.770742 env[1192]: time="2025-03-17T18:43:15.770685534Z" level=info msg="CreateContainer within sandbox \"98044a9b65ca41140a0e77c44b8d906077b76251132f010c5c328c857883906e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:43:15.781635 env[1192]: time="2025-03-17T18:43:15.780740107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzl8d,Uid:bfe69589-6d6b-4f2a-aca3-a095a04dbfcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\"" Mar 17 18:43:15.783232 kubelet[1974]: E0317 18:43:15.783192 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:15.790440 env[1192]: time="2025-03-17T18:43:15.790366419Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:43:15.808605 env[1192]: time="2025-03-17T18:43:15.807217980Z" level=info msg="CreateContainer within sandbox \"98044a9b65ca41140a0e77c44b8d906077b76251132f010c5c328c857883906e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4c33cf37333638e0ffa4de1f095e2af80cccf902fe3c847ce2152c959cfc0653\"" Mar 17 18:43:15.811516 env[1192]: time="2025-03-17T18:43:15.811440533Z" level=info 
msg="StartContainer for \"4c33cf37333638e0ffa4de1f095e2af80cccf902fe3c847ce2152c959cfc0653\"" Mar 17 18:43:15.839114 systemd[1]: Started cri-containerd-4c33cf37333638e0ffa4de1f095e2af80cccf902fe3c847ce2152c959cfc0653.scope. Mar 17 18:43:15.841329 kubelet[1974]: E0317 18:43:15.840221 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:15.846527 env[1192]: time="2025-03-17T18:43:15.846476918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dw625,Uid:6a8e89d9-173c-4b92-b380-8c24b2558912,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:15.885329 env[1192]: time="2025-03-17T18:43:15.885068784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:15.885553 env[1192]: time="2025-03-17T18:43:15.885403885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:15.885553 env[1192]: time="2025-03-17T18:43:15.885450429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:15.886711 env[1192]: time="2025-03-17T18:43:15.885920399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0 pid=2176 runtime=io.containerd.runc.v2 Mar 17 18:43:15.910868 systemd[1]: Started cri-containerd-a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0.scope. 
Mar 17 18:43:15.920880 env[1192]: time="2025-03-17T18:43:15.920816884Z" level=info msg="StartContainer for \"4c33cf37333638e0ffa4de1f095e2af80cccf902fe3c847ce2152c959cfc0653\" returns successfully" Mar 17 18:43:16.006188 env[1192]: time="2025-03-17T18:43:16.006094624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dw625,Uid:6a8e89d9-173c-4b92-b380-8c24b2558912,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\"" Mar 17 18:43:16.007686 kubelet[1974]: E0317 18:43:16.007484 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:16.755368 kubelet[1974]: E0317 18:43:16.755272 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:21.662763 kubelet[1974]: I0317 18:43:21.662661 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clgmf" podStartSLOduration=8.662618684 podStartE2EDuration="8.662618684s" podCreationTimestamp="2025-03-17 18:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:16.784796913 +0000 UTC m=+15.413233137" watchObservedRunningTime="2025-03-17 18:43:21.662618684 +0000 UTC m=+20.291054909" Mar 17 18:43:23.056176 systemd[1]: Started sshd@6-146.190.61.194:22-218.92.0.158:48649.service. Mar 17 18:43:23.921275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193185060.mount: Deactivated successfully. 
Mar 17 18:43:24.062512 sshd[2344]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Mar 17 18:43:26.076810 sshd[2344]: Failed password for root from 218.92.0.158 port 48649 ssh2 Mar 17 18:43:27.610333 env[1192]: time="2025-03-17T18:43:27.610242022Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:27.611792 env[1192]: time="2025-03-17T18:43:27.611743350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:27.616191 env[1192]: time="2025-03-17T18:43:27.616124265Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:43:27.616590 env[1192]: time="2025-03-17T18:43:27.615337129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:27.619193 env[1192]: time="2025-03-17T18:43:27.619130417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:43:27.622900 env[1192]: time="2025-03-17T18:43:27.622833838Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:43:27.644399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774994990.mount: 
Deactivated successfully. Mar 17 18:43:27.654556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681329712.mount: Deactivated successfully. Mar 17 18:43:27.656048 env[1192]: time="2025-03-17T18:43:27.655867853Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\"" Mar 17 18:43:27.660221 env[1192]: time="2025-03-17T18:43:27.659012340Z" level=info msg="StartContainer for \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\"" Mar 17 18:43:27.687092 systemd[1]: Started cri-containerd-827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993.scope. Mar 17 18:43:27.742388 env[1192]: time="2025-03-17T18:43:27.742315524Z" level=info msg="StartContainer for \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\" returns successfully" Mar 17 18:43:27.757768 systemd[1]: cri-containerd-827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993.scope: Deactivated successfully. 
Mar 17 18:43:27.813312 env[1192]: time="2025-03-17T18:43:27.813206646Z" level=info msg="shim disconnected" id=827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993 Mar 17 18:43:27.813312 env[1192]: time="2025-03-17T18:43:27.813281725Z" level=warning msg="cleaning up after shim disconnected" id=827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993 namespace=k8s.io Mar 17 18:43:27.813312 env[1192]: time="2025-03-17T18:43:27.813297675Z" level=info msg="cleaning up dead shim" Mar 17 18:43:27.827686 env[1192]: time="2025-03-17T18:43:27.827578591Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2392 runtime=io.containerd.runc.v2\n" Mar 17 18:43:27.837229 kubelet[1974]: E0317 18:43:27.837178 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:27.842671 env[1192]: time="2025-03-17T18:43:27.842607780Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:43:27.877705 env[1192]: time="2025-03-17T18:43:27.876421188Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\"" Mar 17 18:43:27.879410 env[1192]: time="2025-03-17T18:43:27.879029999Z" level=info msg="StartContainer for \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\"" Mar 17 18:43:27.914813 systemd[1]: Started cri-containerd-874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a.scope. 
Mar 17 18:43:27.969854 env[1192]: time="2025-03-17T18:43:27.969782264Z" level=info msg="StartContainer for \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\" returns successfully" Mar 17 18:43:27.991427 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:43:27.992774 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:43:27.993451 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:43:28.002203 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:43:28.004423 systemd[1]: cri-containerd-874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a.scope: Deactivated successfully. Mar 17 18:43:28.022150 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:43:28.047093 env[1192]: time="2025-03-17T18:43:28.047009573Z" level=info msg="shim disconnected" id=874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a Mar 17 18:43:28.047093 env[1192]: time="2025-03-17T18:43:28.047075660Z" level=warning msg="cleaning up after shim disconnected" id=874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a namespace=k8s.io Mar 17 18:43:28.047093 env[1192]: time="2025-03-17T18:43:28.047090534Z" level=info msg="cleaning up dead shim" Mar 17 18:43:28.061450 env[1192]: time="2025-03-17T18:43:28.061370698Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2461 runtime=io.containerd.runc.v2\n" Mar 17 18:43:28.640752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993-rootfs.mount: Deactivated successfully. 
Mar 17 18:43:28.842715 kubelet[1974]: E0317 18:43:28.842679 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:28.854740 env[1192]: time="2025-03-17T18:43:28.853618246Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:43:28.896864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167778337.mount: Deactivated successfully. Mar 17 18:43:28.916273 env[1192]: time="2025-03-17T18:43:28.916174755Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\"" Mar 17 18:43:28.921262 env[1192]: time="2025-03-17T18:43:28.920781475Z" level=info msg="StartContainer for \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\"" Mar 17 18:43:28.969825 systemd[1]: Started cri-containerd-283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7.scope. Mar 17 18:43:29.029502 env[1192]: time="2025-03-17T18:43:29.029441430Z" level=info msg="StartContainer for \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\" returns successfully" Mar 17 18:43:29.035070 systemd[1]: cri-containerd-283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7.scope: Deactivated successfully. 
Mar 17 18:43:29.073898 env[1192]: time="2025-03-17T18:43:29.073686100Z" level=info msg="shim disconnected" id=283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7 Mar 17 18:43:29.074522 env[1192]: time="2025-03-17T18:43:29.074469737Z" level=warning msg="cleaning up after shim disconnected" id=283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7 namespace=k8s.io Mar 17 18:43:29.074840 env[1192]: time="2025-03-17T18:43:29.074806622Z" level=info msg="cleaning up dead shim" Mar 17 18:43:29.090855 env[1192]: time="2025-03-17T18:43:29.090758514Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2519 runtime=io.containerd.runc.v2\n" Mar 17 18:43:29.640096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7-rootfs.mount: Deactivated successfully. Mar 17 18:43:29.849881 kubelet[1974]: E0317 18:43:29.848149 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:43:29.870351 env[1192]: time="2025-03-17T18:43:29.870299319Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:43:29.917923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719996880.mount: Deactivated successfully. Mar 17 18:43:29.933349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109917777.mount: Deactivated successfully. 
Mar 17 18:43:29.953693 env[1192]: time="2025-03-17T18:43:29.953618663Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\""
Mar 17 18:43:29.957697 env[1192]: time="2025-03-17T18:43:29.957644018Z" level=info msg="StartContainer for \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\""
Mar 17 18:43:30.003062 systemd[1]: Started cri-containerd-343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d.scope.
Mar 17 18:43:30.094492 env[1192]: time="2025-03-17T18:43:30.094414105Z" level=info msg="StartContainer for \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\" returns successfully"
Mar 17 18:43:30.098750 systemd[1]: cri-containerd-343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d.scope: Deactivated successfully.
Mar 17 18:43:30.204126 env[1192]: time="2025-03-17T18:43:30.203953349Z" level=info msg="shim disconnected" id=343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d
Mar 17 18:43:30.204636 env[1192]: time="2025-03-17T18:43:30.204587243Z" level=warning msg="cleaning up after shim disconnected" id=343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d namespace=k8s.io
Mar 17 18:43:30.204877 env[1192]: time="2025-03-17T18:43:30.204845947Z" level=info msg="cleaning up dead shim"
Mar 17 18:43:30.224317 env[1192]: time="2025-03-17T18:43:30.224252374Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n"
Mar 17 18:43:30.369672 sshd[2344]: Failed password for root from 218.92.0.158 port 48649 ssh2
Mar 17 18:43:30.487056 env[1192]: time="2025-03-17T18:43:30.486857579Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:30.489892 env[1192]: time="2025-03-17T18:43:30.489821771Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:30.492087 env[1192]: time="2025-03-17T18:43:30.492012270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:30.493624 env[1192]: time="2025-03-17T18:43:30.493512812Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:43:30.499250 env[1192]: time="2025-03-17T18:43:30.499167424Z" level=info msg="CreateContainer within sandbox \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:43:30.513107 env[1192]: time="2025-03-17T18:43:30.512719090Z" level=info msg="CreateContainer within sandbox \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\""
Mar 17 18:43:30.515628 env[1192]: time="2025-03-17T18:43:30.514879228Z" level=info msg="StartContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\""
Mar 17 18:43:30.543889 systemd[1]: Started cri-containerd-ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff.scope.
Mar 17 18:43:30.600629 env[1192]: time="2025-03-17T18:43:30.600494221Z" level=info msg="StartContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" returns successfully"
Mar 17 18:43:30.853275 kubelet[1974]: E0317 18:43:30.853162 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:30.859257 kubelet[1974]: E0317 18:43:30.859205 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:30.863849 env[1192]: time="2025-03-17T18:43:30.863755359Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:43:30.899388 env[1192]: time="2025-03-17T18:43:30.899281889Z" level=info msg="CreateContainer within sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\""
Mar 17 18:43:30.901352 env[1192]: time="2025-03-17T18:43:30.901248303Z" level=info msg="StartContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\""
Mar 17 18:43:30.956273 systemd[1]: Started cri-containerd-dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884.scope.
Mar 17 18:43:31.121069 env[1192]: time="2025-03-17T18:43:31.120901646Z" level=info msg="StartContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" returns successfully"
Mar 17 18:43:31.204601 kubelet[1974]: I0317 18:43:31.204460 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dw625" podStartSLOduration=3.719011357 podStartE2EDuration="18.204415003s" podCreationTimestamp="2025-03-17 18:43:13 +0000 UTC" firstStartedPulling="2025-03-17 18:43:16.010556374 +0000 UTC m=+14.638992575" lastFinishedPulling="2025-03-17 18:43:30.495960005 +0000 UTC m=+29.124396221" observedRunningTime="2025-03-17 18:43:30.993611258 +0000 UTC m=+29.622047480" watchObservedRunningTime="2025-03-17 18:43:31.204415003 +0000 UTC m=+29.832851226"
Mar 17 18:43:31.457427 kubelet[1974]: I0317 18:43:31.457075 1974 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:43:31.527272 kubelet[1974]: I0317 18:43:31.527192 1974 topology_manager.go:215] "Topology Admit Handler" podUID="12fb3e7e-7199-4655-9433-67fd95c1d30f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bcmn6"
Mar 17 18:43:31.538132 systemd[1]: Created slice kubepods-burstable-pod12fb3e7e_7199_4655_9433_67fd95c1d30f.slice.
Mar 17 18:43:31.543920 kubelet[1974]: I0317 18:43:31.543843 1974 topology_manager.go:215] "Topology Admit Handler" podUID="ea4c2c13-350f-4ee1-b261-175b2cd9fd80" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b95g5"
Mar 17 18:43:31.552161 systemd[1]: Created slice kubepods-burstable-podea4c2c13_350f_4ee1_b261_175b2cd9fd80.slice.
Mar 17 18:43:31.641333 systemd[1]: run-containerd-runc-k8s.io-dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884-runc.TmmUag.mount: Deactivated successfully.
Mar 17 18:43:31.655354 kubelet[1974]: I0317 18:43:31.655133 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea4c2c13-350f-4ee1-b261-175b2cd9fd80-config-volume\") pod \"coredns-7db6d8ff4d-b95g5\" (UID: \"ea4c2c13-350f-4ee1-b261-175b2cd9fd80\") " pod="kube-system/coredns-7db6d8ff4d-b95g5"
Mar 17 18:43:31.655354 kubelet[1974]: I0317 18:43:31.655203 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjfz\" (UniqueName: \"kubernetes.io/projected/ea4c2c13-350f-4ee1-b261-175b2cd9fd80-kube-api-access-vtjfz\") pod \"coredns-7db6d8ff4d-b95g5\" (UID: \"ea4c2c13-350f-4ee1-b261-175b2cd9fd80\") " pod="kube-system/coredns-7db6d8ff4d-b95g5"
Mar 17 18:43:31.655354 kubelet[1974]: I0317 18:43:31.655231 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzs9\" (UniqueName: \"kubernetes.io/projected/12fb3e7e-7199-4655-9433-67fd95c1d30f-kube-api-access-qmzs9\") pod \"coredns-7db6d8ff4d-bcmn6\" (UID: \"12fb3e7e-7199-4655-9433-67fd95c1d30f\") " pod="kube-system/coredns-7db6d8ff4d-bcmn6"
Mar 17 18:43:31.655354 kubelet[1974]: I0317 18:43:31.655312 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12fb3e7e-7199-4655-9433-67fd95c1d30f-config-volume\") pod \"coredns-7db6d8ff4d-bcmn6\" (UID: \"12fb3e7e-7199-4655-9433-67fd95c1d30f\") " pod="kube-system/coredns-7db6d8ff4d-bcmn6"
Mar 17 18:43:31.892842 kubelet[1974]: E0317 18:43:31.892772 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:31.893720 kubelet[1974]: E0317 18:43:31.893206 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:32.144146 kubelet[1974]: E0317 18:43:32.143990 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:32.145608 env[1192]: time="2025-03-17T18:43:32.145508980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bcmn6,Uid:12fb3e7e-7199-4655-9433-67fd95c1d30f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:32.164683 kubelet[1974]: E0317 18:43:32.164554 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:32.167669 env[1192]: time="2025-03-17T18:43:32.167338113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95g5,Uid:ea4c2c13-350f-4ee1-b261-175b2cd9fd80,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:32.895374 kubelet[1974]: E0317 18:43:32.895323 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:33.793212 sshd[2344]: Failed password for root from 218.92.0.158 port 48649 ssh2
Mar 17 18:43:33.897310 kubelet[1974]: E0317 18:43:33.897264 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:35.112511 systemd-networkd[1004]: cilium_host: Link UP
Mar 17 18:43:35.116804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:43:35.116981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:43:35.116068 systemd-networkd[1004]: cilium_net: Link UP
Mar 17 18:43:35.116465 systemd-networkd[1004]: cilium_net: Gained carrier
Mar 17 18:43:35.116968 systemd-networkd[1004]: cilium_host: Gained carrier
Mar 17 18:43:35.316489 systemd-networkd[1004]: cilium_vxlan: Link UP
Mar 17 18:43:35.316499 systemd-networkd[1004]: cilium_vxlan: Gained carrier
Mar 17 18:43:35.702867 systemd-networkd[1004]: cilium_host: Gained IPv6LL
Mar 17 18:43:35.808613 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:43:35.894913 systemd-networkd[1004]: cilium_net: Gained IPv6LL
Mar 17 18:43:36.824514 sshd[2344]: Received disconnect from 218.92.0.158 port 48649:11: [preauth]
Mar 17 18:43:36.824514 sshd[2344]: Disconnected from authenticating user root 218.92.0.158 port 48649 [preauth]
Mar 17 18:43:36.825558 sshd[2344]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Mar 17 18:43:36.827485 systemd[1]: sshd@6-146.190.61.194:22-218.92.0.158:48649.service: Deactivated successfully.
Mar 17 18:43:37.046804 systemd-networkd[1004]: cilium_vxlan: Gained IPv6LL
Mar 17 18:43:37.132056 systemd-networkd[1004]: lxc_health: Link UP
Mar 17 18:43:37.143695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:43:37.142548 systemd-networkd[1004]: lxc_health: Gained carrier
Mar 17 18:43:37.577105 kubelet[1974]: E0317 18:43:37.576493 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:37.610160 kubelet[1974]: I0317 18:43:37.610093 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tzl8d" podStartSLOduration=12.780447264 podStartE2EDuration="24.610069494s" podCreationTimestamp="2025-03-17 18:43:13 +0000 UTC" firstStartedPulling="2025-03-17 18:43:15.789257005 +0000 UTC m=+14.417693217" lastFinishedPulling="2025-03-17 18:43:27.618879246 +0000 UTC m=+26.247315447" observedRunningTime="2025-03-17 18:43:32.069461522 +0000 UTC m=+30.697897743" watchObservedRunningTime="2025-03-17 18:43:37.610069494 +0000 UTC m=+36.238505717"
Mar 17 18:43:37.734410 systemd-networkd[1004]: lxcab6c7a959ec7: Link UP
Mar 17 18:43:37.742712 kernel: eth0: renamed from tmp911ac
Mar 17 18:43:37.746663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcab6c7a959ec7: link becomes ready
Mar 17 18:43:37.745545 systemd-networkd[1004]: lxcab6c7a959ec7: Gained carrier
Mar 17 18:43:37.774106 systemd-networkd[1004]: lxc99a016b63780: Link UP
Mar 17 18:43:37.782314 kernel: eth0: renamed from tmp9ba3c
Mar 17 18:43:37.792448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc99a016b63780: link becomes ready
Mar 17 18:43:37.791277 systemd-networkd[1004]: lxc99a016b63780: Gained carrier
Mar 17 18:43:38.518916 systemd-networkd[1004]: lxc_health: Gained IPv6LL
Mar 17 18:43:38.902854 systemd-networkd[1004]: lxcab6c7a959ec7: Gained IPv6LL
Mar 17 18:43:39.798938 systemd-networkd[1004]: lxc99a016b63780: Gained IPv6LL
Mar 17 18:43:41.643465 kubelet[1974]: I0317 18:43:41.643338 1974 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 18:43:41.645304 kubelet[1974]: E0317 18:43:41.645225 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:41.931658 kubelet[1974]: E0317 18:43:41.931466 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:44.408358 env[1192]: time="2025-03-17T18:43:44.408217200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:44.408908 env[1192]: time="2025-03-17T18:43:44.408375129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:44.408908 env[1192]: time="2025-03-17T18:43:44.408411907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:44.408908 env[1192]: time="2025-03-17T18:43:44.408696181Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b pid=3160 runtime=io.containerd.runc.v2
Mar 17 18:43:44.420622 env[1192]: time="2025-03-17T18:43:44.420442613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:44.420622 env[1192]: time="2025-03-17T18:43:44.420515403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:44.420622 env[1192]: time="2025-03-17T18:43:44.420538875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:44.422622 env[1192]: time="2025-03-17T18:43:44.421336633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba3c06a7e1671b881c4d5827df24c8b00bbc891ea98cefd40a19d77cfbd964e pid=3168 runtime=io.containerd.runc.v2
Mar 17 18:43:44.460073 systemd[1]: run-containerd-runc-k8s.io-911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b-runc.61PwU8.mount: Deactivated successfully.
Mar 17 18:43:44.474554 systemd[1]: Started cri-containerd-911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b.scope.
Mar 17 18:43:44.493266 systemd[1]: Started cri-containerd-9ba3c06a7e1671b881c4d5827df24c8b00bbc891ea98cefd40a19d77cfbd964e.scope.
Mar 17 18:43:44.602644 env[1192]: time="2025-03-17T18:43:44.600435571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b95g5,Uid:ea4c2c13-350f-4ee1-b261-175b2cd9fd80,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ba3c06a7e1671b881c4d5827df24c8b00bbc891ea98cefd40a19d77cfbd964e\""
Mar 17 18:43:44.605775 kubelet[1974]: E0317 18:43:44.604945 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:44.613235 env[1192]: time="2025-03-17T18:43:44.613163033Z" level=info msg="CreateContainer within sandbox \"9ba3c06a7e1671b881c4d5827df24c8b00bbc891ea98cefd40a19d77cfbd964e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:44.620495 env[1192]: time="2025-03-17T18:43:44.620430026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bcmn6,Uid:12fb3e7e-7199-4655-9433-67fd95c1d30f,Namespace:kube-system,Attempt:0,} returns sandbox id \"911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b\""
Mar 17 18:43:44.621743 kubelet[1974]: E0317 18:43:44.621704 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:44.628624 env[1192]: time="2025-03-17T18:43:44.628542455Z" level=info msg="CreateContainer within sandbox \"911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:44.656955 env[1192]: time="2025-03-17T18:43:44.656865538Z" level=info msg="CreateContainer within sandbox \"9ba3c06a7e1671b881c4d5827df24c8b00bbc891ea98cefd40a19d77cfbd964e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"268454a21a98a5c7f0889734306c63df7e0db2b6a0ccade7cee0f8333c77c967\""
Mar 17 18:43:44.662053 env[1192]: time="2025-03-17T18:43:44.661004660Z" level=info msg="StartContainer for \"268454a21a98a5c7f0889734306c63df7e0db2b6a0ccade7cee0f8333c77c967\""
Mar 17 18:43:44.673870 env[1192]: time="2025-03-17T18:43:44.673796486Z" level=info msg="CreateContainer within sandbox \"911acd8fc4c9329a2c8c69b361fef86ea43f9244dc3c2d55e32f492176fe041b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea0d7d5466e3efa0049bd96be7cd8dbd26a1818205e521bc00c184d5a8c37a1a\""
Mar 17 18:43:44.675690 env[1192]: time="2025-03-17T18:43:44.675622012Z" level=info msg="StartContainer for \"ea0d7d5466e3efa0049bd96be7cd8dbd26a1818205e521bc00c184d5a8c37a1a\""
Mar 17 18:43:44.710764 systemd[1]: Started cri-containerd-268454a21a98a5c7f0889734306c63df7e0db2b6a0ccade7cee0f8333c77c967.scope.
Mar 17 18:43:44.742804 systemd[1]: Started cri-containerd-ea0d7d5466e3efa0049bd96be7cd8dbd26a1818205e521bc00c184d5a8c37a1a.scope.
Mar 17 18:43:44.807902 env[1192]: time="2025-03-17T18:43:44.807826119Z" level=info msg="StartContainer for \"ea0d7d5466e3efa0049bd96be7cd8dbd26a1818205e521bc00c184d5a8c37a1a\" returns successfully"
Mar 17 18:43:44.820469 env[1192]: time="2025-03-17T18:43:44.820383664Z" level=info msg="StartContainer for \"268454a21a98a5c7f0889734306c63df7e0db2b6a0ccade7cee0f8333c77c967\" returns successfully"
Mar 17 18:43:44.942050 kubelet[1974]: E0317 18:43:44.941053 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:44.944972 kubelet[1974]: E0317 18:43:44.944924 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:45.015639 kubelet[1974]: I0317 18:43:45.015439 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bcmn6" podStartSLOduration=32.015411224 podStartE2EDuration="32.015411224s" podCreationTimestamp="2025-03-17 18:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:44.978586127 +0000 UTC m=+43.607022339" watchObservedRunningTime="2025-03-17 18:43:45.015411224 +0000 UTC m=+43.643847462"
Mar 17 18:43:45.948160 kubelet[1974]: E0317 18:43:45.948111 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:45.948913 kubelet[1974]: E0317 18:43:45.948322 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:45.969396 kubelet[1974]: I0317 18:43:45.969312 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b95g5" podStartSLOduration=31.969283656 podStartE2EDuration="31.969283656s" podCreationTimestamp="2025-03-17 18:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:45.016590079 +0000 UTC m=+43.645026299" watchObservedRunningTime="2025-03-17 18:43:45.969283656 +0000 UTC m=+44.597719878"
Mar 17 18:43:46.950152 kubelet[1974]: E0317 18:43:46.950100 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:46.951335 kubelet[1974]: E0317 18:43:46.951152 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:43:52.527036 systemd[1]: Started sshd@7-146.190.61.194:22-139.178.68.195:45276.service.
Mar 17 18:43:52.597640 sshd[3327]: Accepted publickey for core from 139.178.68.195 port 45276 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:43:52.600665 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:52.609032 systemd-logind[1183]: New session 6 of user core.
Mar 17 18:43:52.610020 systemd[1]: Started session-6.scope.
Mar 17 18:43:52.880812 sshd[3327]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:52.890032 systemd[1]: sshd@7-146.190.61.194:22-139.178.68.195:45276.service: Deactivated successfully.
Mar 17 18:43:52.891416 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:43:52.893207 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:43:52.894648 systemd-logind[1183]: Removed session 6.
Mar 17 18:43:57.893280 systemd[1]: Started sshd@8-146.190.61.194:22-139.178.68.195:42372.service.
Mar 17 18:43:57.947940 sshd[3341]: Accepted publickey for core from 139.178.68.195 port 42372 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:43:57.951016 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:57.960443 systemd-logind[1183]: New session 7 of user core.
Mar 17 18:43:57.961174 systemd[1]: Started session-7.scope.
Mar 17 18:43:58.143496 sshd[3341]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:58.147910 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:43:58.149763 systemd[1]: sshd@8-146.190.61.194:22-139.178.68.195:42372.service: Deactivated successfully.
Mar 17 18:43:58.151224 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:43:58.153029 systemd-logind[1183]: Removed session 7.
Mar 17 18:44:03.157094 systemd[1]: Started sshd@9-146.190.61.194:22-139.178.68.195:42382.service.
Mar 17 18:44:03.217426 sshd[3358]: Accepted publickey for core from 139.178.68.195 port 42382 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:03.221458 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:03.232796 systemd-logind[1183]: New session 8 of user core.
Mar 17 18:44:03.233702 systemd[1]: Started session-8.scope.
Mar 17 18:44:03.472834 sshd[3358]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:03.487140 systemd[1]: sshd@9-146.190.61.194:22-139.178.68.195:42382.service: Deactivated successfully.
Mar 17 18:44:03.488419 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:44:03.489843 systemd-logind[1183]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:44:03.491870 systemd-logind[1183]: Removed session 8.
Mar 17 18:44:08.490522 systemd[1]: Started sshd@10-146.190.61.194:22-139.178.68.195:56714.service.
Mar 17 18:44:08.569669 sshd[3371]: Accepted publickey for core from 139.178.68.195 port 56714 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:08.574913 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:08.585844 systemd-logind[1183]: New session 9 of user core.
Mar 17 18:44:08.586262 systemd[1]: Started session-9.scope.
Mar 17 18:44:08.769694 sshd[3371]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:08.777385 systemd[1]: sshd@10-146.190.61.194:22-139.178.68.195:56714.service: Deactivated successfully.
Mar 17 18:44:08.779310 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:44:08.780991 systemd-logind[1183]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:44:08.783942 systemd-logind[1183]: Removed session 9.
Mar 17 18:44:13.780708 systemd[1]: Started sshd@11-146.190.61.194:22-139.178.68.195:56722.service.
Mar 17 18:44:13.839380 sshd[3383]: Accepted publickey for core from 139.178.68.195 port 56722 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:13.845127 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.853973 systemd-logind[1183]: New session 10 of user core.
Mar 17 18:44:13.855140 systemd[1]: Started session-10.scope.
Mar 17 18:44:14.048192 sshd[3383]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:14.058192 systemd[1]: sshd@11-146.190.61.194:22-139.178.68.195:56722.service: Deactivated successfully.
Mar 17 18:44:14.060948 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:44:14.063447 systemd-logind[1183]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:44:14.068685 systemd[1]: Started sshd@12-146.190.61.194:22-139.178.68.195:56726.service.
Mar 17 18:44:14.070769 systemd-logind[1183]: Removed session 10.
Mar 17 18:44:14.127780 sshd[3396]: Accepted publickey for core from 139.178.68.195 port 56726 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:14.130783 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:14.140971 systemd[1]: Started session-11.scope.
Mar 17 18:44:14.142241 systemd-logind[1183]: New session 11 of user core.
Mar 17 18:44:14.424352 sshd[3396]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:14.433106 systemd[1]: Started sshd@13-146.190.61.194:22-139.178.68.195:56734.service.
Mar 17 18:44:14.440267 systemd[1]: sshd@12-146.190.61.194:22-139.178.68.195:56726.service: Deactivated successfully.
Mar 17 18:44:14.441523 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:44:14.446471 systemd-logind[1183]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:44:14.448948 systemd-logind[1183]: Removed session 11.
Mar 17 18:44:14.503094 sshd[3407]: Accepted publickey for core from 139.178.68.195 port 56734 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:14.506148 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:14.514329 systemd[1]: Started session-12.scope.
Mar 17 18:44:14.516881 systemd-logind[1183]: New session 12 of user core.
Mar 17 18:44:14.693625 sshd[3407]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:14.701267 systemd[1]: sshd@13-146.190.61.194:22-139.178.68.195:56734.service: Deactivated successfully.
Mar 17 18:44:14.703331 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:44:14.705553 systemd-logind[1183]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:44:14.707810 systemd-logind[1183]: Removed session 12.
Mar 17 18:44:17.626885 kubelet[1974]: E0317 18:44:17.626825 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:18.626962 kubelet[1974]: E0317 18:44:18.626909 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:19.705005 systemd[1]: Started sshd@14-146.190.61.194:22-139.178.68.195:37726.service.
Mar 17 18:44:19.755484 sshd[3422]: Accepted publickey for core from 139.178.68.195 port 37726 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:19.757982 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:19.767972 systemd[1]: Started session-13.scope.
Mar 17 18:44:19.769447 systemd-logind[1183]: New session 13 of user core.
Mar 17 18:44:19.975379 sshd[3422]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:19.980903 systemd[1]: sshd@14-146.190.61.194:22-139.178.68.195:37726.service: Deactivated successfully.
Mar 17 18:44:19.982394 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:44:19.983870 systemd-logind[1183]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:44:19.984996 systemd-logind[1183]: Removed session 13.
Mar 17 18:44:24.991042 systemd[1]: Started sshd@15-146.190.61.194:22-139.178.68.195:37736.service.
Mar 17 18:44:25.069206 sshd[3434]: Accepted publickey for core from 139.178.68.195 port 37736 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:25.073395 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:25.083785 systemd[1]: Started session-14.scope.
Mar 17 18:44:25.084749 systemd-logind[1183]: New session 14 of user core.
Mar 17 18:44:25.295329 sshd[3434]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:25.300821 systemd[1]: sshd@15-146.190.61.194:22-139.178.68.195:37736.service: Deactivated successfully.
Mar 17 18:44:25.302331 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:44:25.304829 systemd-logind[1183]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:44:25.307241 systemd-logind[1183]: Removed session 14.
Mar 17 18:44:25.627632 kubelet[1974]: E0317 18:44:25.627544 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:30.305025 systemd[1]: Started sshd@16-146.190.61.194:22-139.178.68.195:40136.service.
Mar 17 18:44:30.354868 sshd[3446]: Accepted publickey for core from 139.178.68.195 port 40136 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:30.358708 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:30.368315 systemd[1]: Started session-15.scope.
Mar 17 18:44:30.369823 systemd-logind[1183]: New session 15 of user core.
Mar 17 18:44:30.546632 sshd[3446]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:30.558083 systemd[1]: Started sshd@17-146.190.61.194:22-139.178.68.195:40148.service.
Mar 17 18:44:30.559370 systemd[1]: sshd@16-146.190.61.194:22-139.178.68.195:40136.service: Deactivated successfully.
Mar 17 18:44:30.561784 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:44:30.564193 systemd-logind[1183]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:44:30.566200 systemd-logind[1183]: Removed session 15.
Mar 17 18:44:30.615032 sshd[3457]: Accepted publickey for core from 139.178.68.195 port 40148 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:30.619029 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:30.628760 systemd-logind[1183]: New session 16 of user core.
Mar 17 18:44:30.628962 systemd[1]: Started session-16.scope.
Mar 17 18:44:31.046497 sshd[3457]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:31.055671 systemd[1]: Started sshd@18-146.190.61.194:22-139.178.68.195:40164.service.
Mar 17 18:44:31.059462 systemd-logind[1183]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:44:31.061716 systemd[1]: sshd@17-146.190.61.194:22-139.178.68.195:40148.service: Deactivated successfully.
Mar 17 18:44:31.063155 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:44:31.066214 systemd-logind[1183]: Removed session 16.
Mar 17 18:44:31.119338 sshd[3467]: Accepted publickey for core from 139.178.68.195 port 40164 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:31.122837 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:31.131552 systemd[1]: Started session-17.scope.
Mar 17 18:44:31.132070 systemd-logind[1183]: New session 17 of user core.
Mar 17 18:44:33.247327 sshd[3467]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:33.255556 systemd[1]: sshd@18-146.190.61.194:22-139.178.68.195:40164.service: Deactivated successfully.
Mar 17 18:44:33.257929 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:44:33.261927 systemd-logind[1183]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:44:33.265602 systemd[1]: Started sshd@19-146.190.61.194:22-139.178.68.195:40174.service.
Mar 17 18:44:33.270562 systemd-logind[1183]: Removed session 17.
Mar 17 18:44:33.317001 sshd[3483]: Accepted publickey for core from 139.178.68.195 port 40174 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:33.319645 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:33.329426 systemd[1]: Started session-18.scope.
Mar 17 18:44:33.330004 systemd-logind[1183]: New session 18 of user core.
Mar 17 18:44:33.627083 kubelet[1974]: E0317 18:44:33.627027 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:33.755775 sshd[3483]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:33.769985 systemd[1]: Started sshd@20-146.190.61.194:22-139.178.68.195:40182.service.
Mar 17 18:44:33.770873 systemd[1]: sshd@19-146.190.61.194:22-139.178.68.195:40174.service: Deactivated successfully.
Mar 17 18:44:33.774985 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:44:33.779922 systemd-logind[1183]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:44:33.782370 systemd-logind[1183]: Removed session 18.
Mar 17 18:44:33.824435 sshd[3493]: Accepted publickey for core from 139.178.68.195 port 40182 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:33.827029 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:33.835174 systemd[1]: Started session-19.scope.
Mar 17 18:44:33.835844 systemd-logind[1183]: New session 19 of user core.
Mar 17 18:44:34.017706 sshd[3493]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:34.022391 systemd[1]: sshd@20-146.190.61.194:22-139.178.68.195:40182.service: Deactivated successfully.
Mar 17 18:44:34.023364 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:44:34.024282 systemd-logind[1183]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:44:34.026369 systemd-logind[1183]: Removed session 19.
Mar 17 18:44:39.027817 systemd[1]: Started sshd@21-146.190.61.194:22-139.178.68.195:55602.service.
Mar 17 18:44:39.080443 sshd[3507]: Accepted publickey for core from 139.178.68.195 port 55602 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:39.084721 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:39.094626 systemd[1]: Started session-20.scope.
Mar 17 18:44:39.096489 systemd-logind[1183]: New session 20 of user core.
Mar 17 18:44:39.264453 sshd[3507]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:39.270287 systemd[1]: sshd@21-146.190.61.194:22-139.178.68.195:55602.service: Deactivated successfully.
Mar 17 18:44:39.271647 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:44:39.272910 systemd-logind[1183]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:44:39.275214 systemd-logind[1183]: Removed session 20.
Mar 17 18:44:44.275735 systemd[1]: Started sshd@22-146.190.61.194:22-139.178.68.195:55608.service.
Mar 17 18:44:44.321178 sshd[3522]: Accepted publickey for core from 139.178.68.195 port 55608 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:44.323980 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:44.332391 systemd-logind[1183]: New session 21 of user core.
Mar 17 18:44:44.332440 systemd[1]: Started session-21.scope.
Mar 17 18:44:44.501217 sshd[3522]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:44.506093 systemd[1]: sshd@22-146.190.61.194:22-139.178.68.195:55608.service: Deactivated successfully.
Mar 17 18:44:44.507457 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:44:44.509755 systemd-logind[1183]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:44:44.511152 systemd-logind[1183]: Removed session 21.
Mar 17 18:44:44.627029 kubelet[1974]: E0317 18:44:44.626974 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:49.515382 systemd[1]: Started sshd@23-146.190.61.194:22-139.178.68.195:35136.service.
Mar 17 18:44:49.574858 sshd[3535]: Accepted publickey for core from 139.178.68.195 port 35136 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:49.578655 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:49.587022 systemd-logind[1183]: New session 22 of user core.
Mar 17 18:44:49.588473 systemd[1]: Started session-22.scope.
Mar 17 18:44:49.805791 sshd[3535]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:49.814407 systemd[1]: sshd@23-146.190.61.194:22-139.178.68.195:35136.service: Deactivated successfully.
Mar 17 18:44:49.815857 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:44:49.817949 systemd-logind[1183]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:44:49.820311 systemd-logind[1183]: Removed session 22.
Mar 17 18:44:54.627889 kubelet[1974]: E0317 18:44:54.627826 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:44:54.814225 systemd[1]: Started sshd@24-146.190.61.194:22-139.178.68.195:35144.service.
Mar 17 18:44:54.866458 sshd[3547]: Accepted publickey for core from 139.178.68.195 port 35144 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:44:54.869660 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:54.883033 systemd-logind[1183]: New session 23 of user core.
Mar 17 18:44:54.883914 systemd[1]: Started session-23.scope.
Mar 17 18:44:55.046271 sshd[3547]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:55.051311 systemd-logind[1183]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:44:55.051703 systemd[1]: sshd@24-146.190.61.194:22-139.178.68.195:35144.service: Deactivated successfully.
Mar 17 18:44:55.052774 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:44:55.054038 systemd-logind[1183]: Removed session 23.
Mar 17 18:45:00.055421 systemd[1]: Started sshd@25-146.190.61.194:22-139.178.68.195:39304.service.
Mar 17 18:45:00.108957 sshd[3559]: Accepted publickey for core from 139.178.68.195 port 39304 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:00.113182 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:00.122615 systemd[1]: Started session-24.scope.
Mar 17 18:45:00.124181 systemd-logind[1183]: New session 24 of user core.
Mar 17 18:45:00.292130 sshd[3559]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:00.298189 systemd[1]: sshd@25-146.190.61.194:22-139.178.68.195:39304.service: Deactivated successfully.
Mar 17 18:45:00.303216 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:45:00.304630 systemd-logind[1183]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:45:00.306461 systemd-logind[1183]: Removed session 24.
Mar 17 18:45:02.152089 update_engine[1185]: I0317 18:45:02.150875 1185 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 18:45:02.152089 update_engine[1185]: I0317 18:45:02.150979 1185 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 18:45:02.168690 update_engine[1185]: I0317 18:45:02.168354 1185 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 18:45:02.170838 update_engine[1185]: I0317 18:45:02.169420 1185 omaha_request_params.cc:62] Current group set to lts
Mar 17 18:45:02.180240 update_engine[1185]: I0317 18:45:02.180084 1185 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 18:45:02.180240 update_engine[1185]: I0317 18:45:02.180121 1185 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 18:45:02.180240 update_engine[1185]: I0317 18:45:02.180147 1185 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:45:02.230062 update_engine[1185]: I0317 18:45:02.229800 1185 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 18:45:02.230062 update_engine[1185]: I0317 18:45:02.229941 1185 omaha_request_action.cc:270] Posting an Omaha request to disabled
Mar 17 18:45:02.230062 update_engine[1185]: I0317 18:45:02.229952 1185 omaha_request_action.cc:271] Request:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]:
Mar 17 18:45:02.230062 update_engine[1185]: I0317 18:45:02.229960 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:45:02.247454 update_engine[1185]: I0317 18:45:02.247294 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:45:02.248273 update_engine[1185]: E0317 18:45:02.248070 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:45:02.248273 update_engine[1185]: I0317 18:45:02.248222 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 18:45:02.274619 locksmithd[1227]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 18:45:05.305231 systemd[1]: Started sshd@26-146.190.61.194:22-139.178.68.195:39310.service.
Mar 17 18:45:05.357505 sshd[3574]: Accepted publickey for core from 139.178.68.195 port 39310 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:05.361736 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:05.372077 systemd-logind[1183]: New session 25 of user core.
Mar 17 18:45:05.372897 systemd[1]: Started session-25.scope.
Mar 17 18:45:05.540024 sshd[3574]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:05.547479 systemd[1]: sshd@26-146.190.61.194:22-139.178.68.195:39310.service: Deactivated successfully.
Mar 17 18:45:05.549662 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:45:05.552805 systemd-logind[1183]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:45:05.555780 systemd[1]: Started sshd@27-146.190.61.194:22-139.178.68.195:39312.service.
Mar 17 18:45:05.559388 systemd-logind[1183]: Removed session 25.
Mar 17 18:45:05.616272 sshd[3586]: Accepted publickey for core from 139.178.68.195 port 39312 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:05.619148 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:05.627701 systemd-logind[1183]: New session 26 of user core.
Mar 17 18:45:05.628624 systemd[1]: Started session-26.scope.
Mar 17 18:45:07.447691 env[1192]: time="2025-03-17T18:45:07.447528577Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:45:07.465899 env[1192]: time="2025-03-17T18:45:07.465670952Z" level=info msg="StopContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" with timeout 30 (s)"
Mar 17 18:45:07.466540 env[1192]: time="2025-03-17T18:45:07.466369956Z" level=info msg="Stop container \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" with signal terminated"
Mar 17 18:45:07.467791 env[1192]: time="2025-03-17T18:45:07.466933157Z" level=info msg="StopContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" with timeout 2 (s)"
Mar 17 18:45:07.467791 env[1192]: time="2025-03-17T18:45:07.467564403Z" level=info msg="Stop container \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" with signal terminated"
Mar 17 18:45:07.481288 systemd-networkd[1004]: lxc_health: Link DOWN
Mar 17 18:45:07.481301 systemd-networkd[1004]: lxc_health: Lost carrier
Mar 17 18:45:07.505191 systemd[1]: cri-containerd-ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff.scope: Deactivated successfully.
Mar 17 18:45:07.524516 systemd[1]: cri-containerd-dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884.scope: Deactivated successfully.
Mar 17 18:45:07.524961 systemd[1]: cri-containerd-dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884.scope: Consumed 11.255s CPU time.
Mar 17 18:45:07.551151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff-rootfs.mount: Deactivated successfully.
Mar 17 18:45:07.559239 env[1192]: time="2025-03-17T18:45:07.559173861Z" level=info msg="shim disconnected" id=ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff
Mar 17 18:45:07.559239 env[1192]: time="2025-03-17T18:45:07.559225496Z" level=warning msg="cleaning up after shim disconnected" id=ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff namespace=k8s.io
Mar 17 18:45:07.559239 env[1192]: time="2025-03-17T18:45:07.559235380Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:07.567489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884-rootfs.mount: Deactivated successfully.
Mar 17 18:45:07.581514 env[1192]: time="2025-03-17T18:45:07.581437181Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3659 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:07.623903 env[1192]: time="2025-03-17T18:45:07.623834339Z" level=info msg="StopContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" returns successfully"
Mar 17 18:45:07.632793 env[1192]: time="2025-03-17T18:45:07.632742975Z" level=info msg="StopPodSandbox for \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\""
Mar 17 18:45:07.633135 env[1192]: time="2025-03-17T18:45:07.633096740Z" level=info msg="Container to stop \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.637298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0-shm.mount: Deactivated successfully.
Mar 17 18:45:07.642190 env[1192]: time="2025-03-17T18:45:07.642088100Z" level=info msg="shim disconnected" id=dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884
Mar 17 18:45:07.642190 env[1192]: time="2025-03-17T18:45:07.642179906Z" level=warning msg="cleaning up after shim disconnected" id=dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884 namespace=k8s.io
Mar 17 18:45:07.642190 env[1192]: time="2025-03-17T18:45:07.642198361Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:07.659093 systemd[1]: cri-containerd-a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0.scope: Deactivated successfully.
Mar 17 18:45:07.678468 env[1192]: time="2025-03-17T18:45:07.678385643Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:07.681853 env[1192]: time="2025-03-17T18:45:07.681768057Z" level=info msg="StopContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" returns successfully"
Mar 17 18:45:07.683238 env[1192]: time="2025-03-17T18:45:07.683168877Z" level=info msg="StopPodSandbox for \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\""
Mar 17 18:45:07.683559 env[1192]: time="2025-03-17T18:45:07.683291622Z" level=info msg="Container to stop \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.683700 env[1192]: time="2025-03-17T18:45:07.683651938Z" level=info msg="Container to stop \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.683776 env[1192]: time="2025-03-17T18:45:07.683700147Z" level=info msg="Container to stop \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.683776 env[1192]: time="2025-03-17T18:45:07.683734248Z" level=info msg="Container to stop \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.683776 env[1192]: time="2025-03-17T18:45:07.683763333Z" level=info msg="Container to stop \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:07.688998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f-shm.mount: Deactivated successfully.
Mar 17 18:45:07.707474 systemd[1]: cri-containerd-66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f.scope: Deactivated successfully.
Mar 17 18:45:07.728459 env[1192]: time="2025-03-17T18:45:07.728370950Z" level=info msg="shim disconnected" id=a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0
Mar 17 18:45:07.728459 env[1192]: time="2025-03-17T18:45:07.728456225Z" level=warning msg="cleaning up after shim disconnected" id=a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0 namespace=k8s.io
Mar 17 18:45:07.728966 env[1192]: time="2025-03-17T18:45:07.728476236Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:07.761888 env[1192]: time="2025-03-17T18:45:07.761826883Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3713 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:07.762877 env[1192]: time="2025-03-17T18:45:07.762776284Z" level=info msg="shim disconnected" id=66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f
Mar 17 18:45:07.762877 env[1192]: time="2025-03-17T18:45:07.762846298Z" level=warning msg="cleaning up after shim disconnected" id=66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f namespace=k8s.io
Mar 17 18:45:07.762877 env[1192]: time="2025-03-17T18:45:07.762860597Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:07.763438 env[1192]: time="2025-03-17T18:45:07.763388564Z" level=info msg="TearDown network for sandbox \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\" successfully"
Mar 17 18:45:07.764113 env[1192]: time="2025-03-17T18:45:07.764058467Z" level=info msg="StopPodSandbox for \"a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0\" returns successfully"
Mar 17 18:45:07.787824 env[1192]: time="2025-03-17T18:45:07.787745527Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:07.788758 env[1192]: time="2025-03-17T18:45:07.788683357Z" level=info msg="TearDown network for sandbox \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" successfully"
Mar 17 18:45:07.789270 env[1192]: time="2025-03-17T18:45:07.788946659Z" level=info msg="StopPodSandbox for \"66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f\" returns successfully"
Mar 17 18:45:07.798862 kubelet[1974]: I0317 18:45:07.798806 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzmm\" (UniqueName: \"kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm\") pod \"6a8e89d9-173c-4b92-b380-8c24b2558912\" (UID: \"6a8e89d9-173c-4b92-b380-8c24b2558912\") "
Mar 17 18:45:07.802598 kubelet[1974]: I0317 18:45:07.798899 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path\") pod \"6a8e89d9-173c-4b92-b380-8c24b2558912\" (UID: \"6a8e89d9-173c-4b92-b380-8c24b2558912\") "
Mar 17 18:45:07.820014 kubelet[1974]: I0317 18:45:07.816383 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a8e89d9-173c-4b92-b380-8c24b2558912" (UID: "6a8e89d9-173c-4b92-b380-8c24b2558912"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:45:07.830744 kubelet[1974]: I0317 18:45:07.830618 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm" (OuterVolumeSpecName: "kube-api-access-zzzmm") pod "6a8e89d9-173c-4b92-b380-8c24b2558912" (UID: "6a8e89d9-173c-4b92-b380-8c24b2558912"). InnerVolumeSpecName "kube-api-access-zzzmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:45:07.899721 kubelet[1974]: I0317 18:45:07.899609 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hostproc\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.899721 kubelet[1974]: I0317 18:45:07.899716 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hubble-tls\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899747 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-lib-modules\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899776 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-kernel\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899815 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-net\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899840 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-etc-cni-netd\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899867 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900053 kubelet[1974]: I0317 18:45:07.899893 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prgxb\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.899916 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-run\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.899937 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-cgroup\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.899960 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-bpf-maps\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.899987 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.900025 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-xtables-lock\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.900340 kubelet[1974]: I0317 18:45:07.900050 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cni-path\") pod \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\" (UID: \"bfe69589-6d6b-4f2a-aca3-a095a04dbfcb\") "
Mar 17 18:45:07.902464 kubelet[1974]: I0317 18:45:07.902410 1974 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zzzmm\" (UniqueName: \"kubernetes.io/projected/6a8e89d9-173c-4b92-b380-8c24b2558912-kube-api-access-zzzmm\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\""
Mar 17 18:45:07.902464 kubelet[1974]: I0317 18:45:07.902469 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a8e89d9-173c-4b92-b380-8c24b2558912-cilium-config-path\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\""
Mar 17 18:45:07.902737 kubelet[1974]: I0317 18:45:07.902520 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cni-path" (OuterVolumeSpecName: "cni-path") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.902921 kubelet[1974]: I0317 18:45:07.902885 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hostproc" (OuterVolumeSpecName: "hostproc") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.903840 kubelet[1974]: I0317 18:45:07.903790 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.904065 kubelet[1974]: I0317 18:45:07.904048 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.904191 kubelet[1974]: I0317 18:45:07.904176 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.907426 kubelet[1974]: I0317 18:45:07.907335 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:45:07.907809 kubelet[1974]: I0317 18:45:07.907776 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.908012 kubelet[1974]: I0317 18:45:07.907978 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.908166 kubelet[1974]: I0317 18:45:07.908147 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.908327 kubelet[1974]: I0317 18:45:07.908306 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.908463 kubelet[1974]: I0317 18:45:07.908444 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:45:07.909735 kubelet[1974]: I0317 18:45:07.909657 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb" (OuterVolumeSpecName: "kube-api-access-prgxb") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "kube-api-access-prgxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:45:07.914224 kubelet[1974]: I0317 18:45:07.914137 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:45:07.917772 kubelet[1974]: I0317 18:45:07.917703 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" (UID: "bfe69589-6d6b-4f2a-aca3-a095a04dbfcb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:45:08.005657 kubelet[1974]: I0317 18:45:08.003762 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-config-path\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\""
Mar 17 18:45:08.005980 kubelet[1974]: I0317 18:45:08.005947 1974 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-bpf-maps\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\""
Mar 17 18:45:08.006082 kubelet[1974]: I0317 18:45:08.006066 1974 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-xtables-lock\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\""
Mar 17 18:45:08.006178 kubelet[1974]: I0317 18:45:08.006164 1974 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName:
\"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cni-path\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.006380 kubelet[1974]: I0317 18:45:08.006363 1974 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hostproc\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.006477 kubelet[1974]: I0317 18:45:08.006463 1974 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-hubble-tls\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.006621 kubelet[1974]: I0317 18:45:08.006546 1974 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-lib-modules\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.006761 kubelet[1974]: I0317 18:45:08.006743 1974 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-kernel\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006868 1974 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-host-proc-sys-net\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006887 1974 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-prgxb\" (UniqueName: \"kubernetes.io/projected/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-kube-api-access-prgxb\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006900 1974 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-etc-cni-netd\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006912 1974 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-clustermesh-secrets\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006926 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-run\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.007229 kubelet[1974]: I0317 18:45:08.006938 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb-cilium-cgroup\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:08.282546 kubelet[1974]: I0317 18:45:08.281817 1974 scope.go:117] "RemoveContainer" containerID="ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff" Mar 17 18:45:08.288175 env[1192]: time="2025-03-17T18:45:08.287699780Z" level=info msg="RemoveContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\"" Mar 17 18:45:08.288371 systemd[1]: Removed slice kubepods-besteffort-pod6a8e89d9_173c_4b92_b380_8c24b2558912.slice. Mar 17 18:45:08.294549 env[1192]: time="2025-03-17T18:45:08.294497748Z" level=info msg="RemoveContainer for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" returns successfully" Mar 17 18:45:08.300360 systemd[1]: Removed slice kubepods-burstable-podbfe69589_6d6b_4f2a_aca3_a095a04dbfcb.slice. Mar 17 18:45:08.300554 systemd[1]: kubepods-burstable-podbfe69589_6d6b_4f2a_aca3_a095a04dbfcb.slice: Consumed 11.427s CPU time. 
Mar 17 18:45:08.306371 kubelet[1974]: I0317 18:45:08.306335 1974 scope.go:117] "RemoveContainer" containerID="ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff"
Mar 17 18:45:08.308026 env[1192]: time="2025-03-17T18:45:08.307678860Z" level=error msg="ContainerStatus for \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\": not found"
Mar 17 18:45:08.311540 kubelet[1974]: E0317 18:45:08.311450 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\": not found" containerID="ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff"
Mar 17 18:45:08.313675 kubelet[1974]: I0317 18:45:08.313468 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff"} err="failed to get container status \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"ace8917664c2c327f835ef0873a79641bdc533824915609269e6dd0cc74d80ff\": not found"
Mar 17 18:45:08.314354 kubelet[1974]: I0317 18:45:08.314330 1974 scope.go:117] "RemoveContainer" containerID="dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884"
Mar 17 18:45:08.327278 env[1192]: time="2025-03-17T18:45:08.326740272Z" level=info msg="RemoveContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\""
Mar 17 18:45:08.330150 env[1192]: time="2025-03-17T18:45:08.330081576Z" level=info msg="RemoveContainer for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" returns successfully"
Mar 17 18:45:08.332725 kubelet[1974]: I0317 18:45:08.332673 1974 scope.go:117] "RemoveContainer" containerID="343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d"
Mar 17 18:45:08.334889 env[1192]: time="2025-03-17T18:45:08.334840269Z" level=info msg="RemoveContainer for \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\""
Mar 17 18:45:08.340561 env[1192]: time="2025-03-17T18:45:08.340492319Z" level=info msg="RemoveContainer for \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\" returns successfully"
Mar 17 18:45:08.341478 kubelet[1974]: I0317 18:45:08.341434 1974 scope.go:117] "RemoveContainer" containerID="283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7"
Mar 17 18:45:08.347743 env[1192]: time="2025-03-17T18:45:08.347675296Z" level=info msg="RemoveContainer for \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\""
Mar 17 18:45:08.354239 env[1192]: time="2025-03-17T18:45:08.354174904Z" level=info msg="RemoveContainer for \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\" returns successfully"
Mar 17 18:45:08.354815 kubelet[1974]: I0317 18:45:08.354786 1974 scope.go:117] "RemoveContainer" containerID="874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a"
Mar 17 18:45:08.358751 env[1192]: time="2025-03-17T18:45:08.358694996Z" level=info msg="RemoveContainer for \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\""
Mar 17 18:45:08.363714 env[1192]: time="2025-03-17T18:45:08.362661947Z" level=info msg="RemoveContainer for \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\" returns successfully"
Mar 17 18:45:08.365829 kubelet[1974]: I0317 18:45:08.365782 1974 scope.go:117] "RemoveContainer" containerID="827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993"
Mar 17 18:45:08.370936 env[1192]: time="2025-03-17T18:45:08.370884760Z" level=info msg="RemoveContainer for \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\""
Mar 17 18:45:08.376477 env[1192]: time="2025-03-17T18:45:08.376391158Z" level=info msg="RemoveContainer for \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\" returns successfully"
Mar 17 18:45:08.377396 kubelet[1974]: I0317 18:45:08.377342 1974 scope.go:117] "RemoveContainer" containerID="dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884"
Mar 17 18:45:08.378266 env[1192]: time="2025-03-17T18:45:08.378165611Z" level=error msg="ContainerStatus for \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\": not found"
Mar 17 18:45:08.378847 kubelet[1974]: E0317 18:45:08.378802 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\": not found" containerID="dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884"
Mar 17 18:45:08.379133 kubelet[1974]: I0317 18:45:08.379069 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884"} err="failed to get container status \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcfb68811b97f91aee1057f1f8d76f5010ff1ed25f6b6a8eaf25c47306abc884\": not found"
Mar 17 18:45:08.379305 kubelet[1974]: I0317 18:45:08.379286 1974 scope.go:117] "RemoveContainer" containerID="343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d"
Mar 17 18:45:08.379945 env[1192]: time="2025-03-17T18:45:08.379847944Z" level=error msg="ContainerStatus for \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\": not found"
Mar 17 18:45:08.380425 kubelet[1974]: E0317 18:45:08.380396 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\": not found" containerID="343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d"
Mar 17 18:45:08.380661 kubelet[1974]: I0317 18:45:08.380616 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d"} err="failed to get container status \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\": rpc error: code = NotFound desc = an error occurred when try to find container \"343d93f803e4741acbf90b8cd33144a532e40702c84f0a4932978a811f91e85d\": not found"
Mar 17 18:45:08.380818 kubelet[1974]: I0317 18:45:08.380799 1974 scope.go:117] "RemoveContainer" containerID="283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7"
Mar 17 18:45:08.381548 env[1192]: time="2025-03-17T18:45:08.381418159Z" level=error msg="ContainerStatus for \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\": not found"
Mar 17 18:45:08.381945 kubelet[1974]: E0317 18:45:08.381901 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\": not found" containerID="283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7"
Mar 17 18:45:08.382185 kubelet[1974]: I0317 18:45:08.382133 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7"} err="failed to get container status \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"283be87f7ba5ba81385d8da979789d0d3eecd7a6e9ee6c737c9536307c7c24e7\": not found"
Mar 17 18:45:08.382364 kubelet[1974]: I0317 18:45:08.382336 1974 scope.go:117] "RemoveContainer" containerID="874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a"
Mar 17 18:45:08.383088 env[1192]: time="2025-03-17T18:45:08.382984006Z" level=error msg="ContainerStatus for \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\": not found"
Mar 17 18:45:08.383799 kubelet[1974]: E0317 18:45:08.383758 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\": not found" containerID="874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a"
Mar 17 18:45:08.384117 kubelet[1974]: I0317 18:45:08.384051 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a"} err="failed to get container status \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\": rpc error: code = NotFound desc = an error occurred when try to find container \"874705925e5ba6a7eca2bf3a20d3de7e36cbf19253d4527ff3d89c0f538a225a\": not found"
Mar 17 18:45:08.384324 kubelet[1974]: I0317 18:45:08.384290 1974 scope.go:117] "RemoveContainer" containerID="827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993"
Mar 17 18:45:08.384989 env[1192]: time="2025-03-17T18:45:08.384896486Z" level=error msg="ContainerStatus for \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\": not found"
Mar 17 18:45:08.385463 kubelet[1974]: E0317 18:45:08.385413 1974 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\": not found" containerID="827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993"
Mar 17 18:45:08.385831 kubelet[1974]: I0317 18:45:08.385771 1974 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993"} err="failed to get container status \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\": rpc error: code = NotFound desc = an error occurred when try to find container \"827638c0c1c5e4237d5bc31056a49eac47e8fdab824bcaccfc2f6a91c908f993\": not found"
Mar 17 18:45:08.404532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8b1b0ab7178c26ddd790d3ad54314ee841bb90e25e4cdf619ccd40bf27ab3a0-rootfs.mount: Deactivated successfully.
Mar 17 18:45:08.404704 systemd[1]: var-lib-kubelet-pods-6a8e89d9\x2d173c\x2d4b92\x2db380\x2d8c24b2558912-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzzzmm.mount: Deactivated successfully.
Mar 17 18:45:08.404781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66e77c88e8fc0982c389e351a3cc7ea7ea570e94874115874122535946e45e3f-rootfs.mount: Deactivated successfully.
Mar 17 18:45:08.404860 systemd[1]: var-lib-kubelet-pods-bfe69589\x2d6d6b\x2d4f2a\x2daca3\x2da095a04dbfcb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:45:08.404932 systemd[1]: var-lib-kubelet-pods-bfe69589\x2d6d6b\x2d4f2a\x2daca3\x2da095a04dbfcb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dprgxb.mount: Deactivated successfully.
Mar 17 18:45:08.405005 systemd[1]: var-lib-kubelet-pods-bfe69589\x2d6d6b\x2d4f2a\x2daca3\x2da095a04dbfcb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:45:09.297339 sshd[3586]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:09.305776 systemd[1]: Started sshd@28-146.190.61.194:22-139.178.68.195:35410.service.
Mar 17 18:45:09.309451 systemd[1]: sshd@27-146.190.61.194:22-139.178.68.195:39312.service: Deactivated successfully.
Mar 17 18:45:09.311721 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:45:09.316427 systemd-logind[1183]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:45:09.318321 systemd-logind[1183]: Removed session 26.
Mar 17 18:45:09.365434 sshd[3757]: Accepted publickey for core from 139.178.68.195 port 35410 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:09.367970 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:09.375620 systemd-logind[1183]: New session 27 of user core.
Mar 17 18:45:09.376237 systemd[1]: Started session-27.scope.
Mar 17 18:45:09.629595 kubelet[1974]: I0317 18:45:09.629518 1974 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8e89d9-173c-4b92-b380-8c24b2558912" path="/var/lib/kubelet/pods/6a8e89d9-173c-4b92-b380-8c24b2558912/volumes"
Mar 17 18:45:09.630356 kubelet[1974]: I0317 18:45:09.630320 1974 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" path="/var/lib/kubelet/pods/bfe69589-6d6b-4f2a-aca3-a095a04dbfcb/volumes"
Mar 17 18:45:10.116990 sshd[3757]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:10.125085 systemd[1]: sshd@28-146.190.61.194:22-139.178.68.195:35410.service: Deactivated successfully.
Mar 17 18:45:10.127035 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:45:10.128798 systemd-logind[1183]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:45:10.131228 systemd[1]: Started sshd@29-146.190.61.194:22-139.178.68.195:35426.service.
Mar 17 18:45:10.140229 systemd-logind[1183]: Removed session 27.
Mar 17 18:45:10.169235 kubelet[1974]: I0317 18:45:10.168052 1974 topology_manager.go:215] "Topology Admit Handler" podUID="4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" podNamespace="kube-system" podName="cilium-s65xz"
Mar 17 18:45:10.171070 kubelet[1974]: E0317 18:45:10.171014 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="clean-cilium-state"
Mar 17 18:45:10.171070 kubelet[1974]: E0317 18:45:10.171080 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6a8e89d9-173c-4b92-b380-8c24b2558912" containerName="cilium-operator"
Mar 17 18:45:10.171342 kubelet[1974]: E0317 18:45:10.171093 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="cilium-agent"
Mar 17 18:45:10.171342 kubelet[1974]: E0317 18:45:10.171104 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="apply-sysctl-overwrites"
Mar 17 18:45:10.171342 kubelet[1974]: E0317 18:45:10.171113 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="mount-bpf-fs"
Mar 17 18:45:10.171342 kubelet[1974]: E0317 18:45:10.171124 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="mount-cgroup"
Mar 17 18:45:10.171342 kubelet[1974]: I0317 18:45:10.171201 1974 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8e89d9-173c-4b92-b380-8c24b2558912" containerName="cilium-operator"
Mar 17 18:45:10.171342 kubelet[1974]: I0317 18:45:10.171212 1974 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe69589-6d6b-4f2a-aca3-a095a04dbfcb" containerName="cilium-agent"
Mar 17 18:45:10.207757 sshd[3768]: Accepted publickey for core from 139.178.68.195 port 35426 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:10.214035 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:10.225151 systemd[1]: Created slice kubepods-burstable-pod4d5d9eae_3225_4b2b_bc53_2bf88bd25b57.slice.
Mar 17 18:45:10.230080 systemd[1]: Started session-28.scope.
Mar 17 18:45:10.231128 systemd-logind[1183]: New session 28 of user core.
Mar 17 18:45:10.238294 kubelet[1974]: I0317 18:45:10.238241 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-xtables-lock\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.238702 kubelet[1974]: I0317 18:45:10.238668 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-bpf-maps\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.238827 kubelet[1974]: I0317 18:45:10.238810 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-config-path\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.239021 kubelet[1974]: I0317 18:45:10.238996 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsv52\" (UniqueName: \"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-kube-api-access-qsv52\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.239150 kubelet[1974]: I0317 18:45:10.239133 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-etc-cni-netd\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.239249 kubelet[1974]: I0317 18:45:10.239235 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-run\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.244647 kubelet[1974]: I0317 18:45:10.239333 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-clustermesh-secrets\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.245105 kubelet[1974]: I0317 18:45:10.245072 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-ipsec-secrets\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.245295 kubelet[1974]: I0317 18:45:10.245270 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-cgroup\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.245489 kubelet[1974]: I0317 18:45:10.245468 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-lib-modules\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.245731 kubelet[1974]: I0317 18:45:10.245624 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-net\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.245906 kubelet[1974]: I0317 18:45:10.245862 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-kernel\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.246015 kubelet[1974]: I0317 18:45:10.245995 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hostproc\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.246190 kubelet[1974]: I0317 18:45:10.246170 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cni-path\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.246294 kubelet[1974]: I0317 18:45:10.246277 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hubble-tls\") pod \"cilium-s65xz\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " pod="kube-system/cilium-s65xz"
Mar 17 18:45:10.491328 sshd[3768]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:10.502454 systemd[1]: Started sshd@30-146.190.61.194:22-139.178.68.195:35440.service.
Mar 17 18:45:10.509539 systemd[1]: sshd@29-146.190.61.194:22-139.178.68.195:35426.service: Deactivated successfully.
Mar 17 18:45:10.511030 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:45:10.513007 systemd-logind[1183]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:45:10.514875 systemd-logind[1183]: Removed session 28.
Mar 17 18:45:10.521766 kubelet[1974]: E0317 18:45:10.521708 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:10.526172 env[1192]: time="2025-03-17T18:45:10.525972042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s65xz,Uid:4d5d9eae-3225-4b2b-bc53-2bf88bd25b57,Namespace:kube-system,Attempt:0,}"
Mar 17 18:45:10.562803 env[1192]: time="2025-03-17T18:45:10.562676396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:45:10.562993 env[1192]: time="2025-03-17T18:45:10.562810025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:45:10.562993 env[1192]: time="2025-03-17T18:45:10.562844859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:45:10.563144 env[1192]: time="2025-03-17T18:45:10.563073556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1 pid=3792 runtime=io.containerd.runc.v2
Mar 17 18:45:10.585698 sshd[3783]: Accepted publickey for core from 139.178.68.195 port 35440 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:45:10.588155 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:45:10.611411 systemd[1]: Started cri-containerd-06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1.scope.
Mar 17 18:45:10.626876 systemd[1]: Started session-29.scope.
Mar 17 18:45:10.627710 systemd-logind[1183]: New session 29 of user core.
Mar 17 18:45:10.669298 env[1192]: time="2025-03-17T18:45:10.669225303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s65xz,Uid:4d5d9eae-3225-4b2b-bc53-2bf88bd25b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\""
Mar 17 18:45:10.670422 kubelet[1974]: E0317 18:45:10.670364 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:10.675358 env[1192]: time="2025-03-17T18:45:10.675271522Z" level=info msg="CreateContainer within sandbox \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:45:10.686654 env[1192]: time="2025-03-17T18:45:10.686533627Z" level=info msg="CreateContainer within sandbox \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\""
Mar 17 18:45:10.689456 env[1192]: time="2025-03-17T18:45:10.687905857Z" level=info msg="StartContainer for \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\""
Mar 17 18:45:10.731900 systemd[1]: Started cri-containerd-d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685.scope.
Mar 17 18:45:10.758182 systemd[1]: cri-containerd-d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685.scope: Deactivated successfully.
Mar 17 18:45:10.782174 env[1192]: time="2025-03-17T18:45:10.778693591Z" level=info msg="shim disconnected" id=d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685
Mar 17 18:45:10.782174 env[1192]: time="2025-03-17T18:45:10.778757154Z" level=warning msg="cleaning up after shim disconnected" id=d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685 namespace=k8s.io
Mar 17 18:45:10.782174 env[1192]: time="2025-03-17T18:45:10.778769957Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:10.796625 env[1192]: time="2025-03-17T18:45:10.793915020Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:45:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 17 18:45:10.796625 env[1192]: time="2025-03-17T18:45:10.794321679Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Mar 17 18:45:10.798612 env[1192]: time="2025-03-17T18:45:10.798355823Z" level=error msg="Failed to pipe stdout of container \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\"" error="reading from a closed fifo"
Mar 17 18:45:10.798612 env[1192]: time="2025-03-17T18:45:10.798453517Z" level=error msg="Failed to pipe stderr of container \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\"" error="reading from a closed fifo"
Mar 17 18:45:10.801912 env[1192]: time="2025-03-17T18:45:10.801245397Z" level=error msg="StartContainer for \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Mar 17 18:45:10.803600 kubelet[1974]: E0317 18:45:10.801614 1974 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685"
Mar 17 18:45:10.809901 kubelet[1974]: E0317 18:45:10.809810 1974 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Mar 17 18:45:10.809901 kubelet[1974]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Mar 17 18:45:10.809901 kubelet[1974]: rm /hostbin/cilium-mount
Mar 17 18:45:10.810210 kubelet[1974]:
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsv52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-s65xz_kube-system(4d5d9eae-3225-4b2b-bc53-2bf88bd25b57): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:45:10.810210 kubelet[1974]: E0317 18:45:10.809920 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s65xz" podUID="4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" Mar 17 18:45:11.305484 env[1192]: time="2025-03-17T18:45:11.305276645Z" level=info msg="StopPodSandbox for \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\"" Mar 17 18:45:11.305484 env[1192]: time="2025-03-17T18:45:11.305357261Z" level=info msg="Container to stop \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:45:11.316847 systemd[1]: cri-containerd-06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1.scope: Deactivated successfully. Mar 17 18:45:11.363217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1-shm.mount: Deactivated successfully. 
Mar 17 18:45:11.367811 env[1192]: time="2025-03-17T18:45:11.367754113Z" level=info msg="shim disconnected" id=06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1 Mar 17 18:45:11.368240 env[1192]: time="2025-03-17T18:45:11.368203931Z" level=warning msg="cleaning up after shim disconnected" id=06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1 namespace=k8s.io Mar 17 18:45:11.368425 env[1192]: time="2025-03-17T18:45:11.368396950Z" level=info msg="cleaning up dead shim" Mar 17 18:45:11.381698 env[1192]: time="2025-03-17T18:45:11.381616027Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" Mar 17 18:45:11.382491 env[1192]: time="2025-03-17T18:45:11.382426259Z" level=info msg="TearDown network for sandbox \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\" successfully" Mar 17 18:45:11.382816 env[1192]: time="2025-03-17T18:45:11.382774682Z" level=info msg="StopPodSandbox for \"06727b694d9e7aebe29da50a30325ab5ee9f7b22a3ea67dd9fcca74f34ceafb1\" returns successfully" Mar 17 18:45:11.459782 kubelet[1974]: I0317 18:45:11.459464 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-bpf-maps\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460177 kubelet[1974]: I0317 18:45:11.460152 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsv52\" (UniqueName: \"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-kube-api-access-qsv52\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460306 kubelet[1974]: I0317 18:45:11.460291 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-xtables-lock\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460408 kubelet[1974]: I0317 18:45:11.460393 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-etc-cni-netd\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460510 kubelet[1974]: I0317 18:45:11.460497 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-lib-modules\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460660 kubelet[1974]: I0317 18:45:11.460640 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hubble-tls\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460830 kubelet[1974]: I0317 18:45:11.460808 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-kernel\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.460974 kubelet[1974]: I0317 18:45:11.460957 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-ipsec-secrets\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461079 kubelet[1974]: I0317 
18:45:11.461065 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-net\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461184 kubelet[1974]: I0317 18:45:11.461171 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hostproc\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461310 kubelet[1974]: I0317 18:45:11.461271 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cni-path\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461466 kubelet[1974]: I0317 18:45:11.461435 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-run\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461704 kubelet[1974]: I0317 18:45:11.461590 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-clustermesh-secrets\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.461915 kubelet[1974]: I0317 18:45:11.461870 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-config-path\") pod 
\"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.462078 kubelet[1974]: I0317 18:45:11.462057 1974 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-cgroup\") pod \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\" (UID: \"4d5d9eae-3225-4b2b-bc53-2bf88bd25b57\") " Mar 17 18:45:11.462270 kubelet[1974]: I0317 18:45:11.459893 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.462395 kubelet[1974]: I0317 18:45:11.462244 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.462545 kubelet[1974]: I0317 18:45:11.462521 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.462698 kubelet[1974]: I0317 18:45:11.462680 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.462856 kubelet[1974]: I0317 18:45:11.462841 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.464302 kubelet[1974]: I0317 18:45:11.464247 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.464302 kubelet[1974]: I0317 18:45:11.464305 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.464647 kubelet[1974]: I0317 18:45:11.464623 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.464770 kubelet[1974]: I0317 18:45:11.464651 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.471218 systemd[1]: var-lib-kubelet-pods-4d5d9eae\x2d3225\x2d4b2b\x2dbc53\x2d2bf88bd25b57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsv52.mount: Deactivated successfully. Mar 17 18:45:11.478202 systemd[1]: var-lib-kubelet-pods-4d5d9eae\x2d3225\x2d4b2b\x2dbc53\x2d2bf88bd25b57-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:45:11.478373 systemd[1]: var-lib-kubelet-pods-4d5d9eae\x2d3225\x2d4b2b\x2dbc53\x2d2bf88bd25b57-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:45:11.487525 systemd[1]: var-lib-kubelet-pods-4d5d9eae\x2d3225\x2d4b2b\x2dbc53\x2d2bf88bd25b57-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:45:11.490810 kubelet[1974]: I0317 18:45:11.490757 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:11.491266 kubelet[1974]: I0317 18:45:11.491223 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:45:11.491757 kubelet[1974]: I0317 18:45:11.491710 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:45:11.491978 kubelet[1974]: I0317 18:45:11.491957 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-kube-api-access-qsv52" (OuterVolumeSpecName: "kube-api-access-qsv52") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "kube-api-access-qsv52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:45:11.492386 kubelet[1974]: I0317 18:45:11.492357 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:45:11.493892 kubelet[1974]: I0317 18:45:11.493822 1974 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" (UID: "4d5d9eae-3225-4b2b-bc53-2bf88bd25b57"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562778 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-run\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562833 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-ipsec-secrets\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562847 1974 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-net\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562860 1974 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hostproc\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562872 1974 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cni-path\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562882 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-config-path\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562892 1974 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-clustermesh-secrets\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562901 1974 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-cilium-cgroup\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562910 1974 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-bpf-maps\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562919 1974 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qsv52\" (UniqueName: \"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-kube-api-access-qsv52\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562930 1974 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-hubble-tls\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562938 1974 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-xtables-lock\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562946 1974 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-etc-cni-netd\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.562984 kubelet[1974]: I0317 18:45:11.562958 1974 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-lib-modules\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.565030 kubelet[1974]: I0317 18:45:11.564975 1974 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57-host-proc-sys-kernel\") on node \"ci-3510.3.7-8-addee6c60b\" DevicePath \"\"" Mar 17 18:45:11.634170 systemd[1]: Removed slice kubepods-burstable-pod4d5d9eae_3225_4b2b_bc53_2bf88bd25b57.slice. 
Mar 17 18:45:11.798147 kubelet[1974]: E0317 18:45:11.798053 1974 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:45:12.131857 update_engine[1185]: I0317 18:45:12.130823 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:45:12.131857 update_engine[1185]: I0317 18:45:12.131222 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:45:12.131857 update_engine[1185]: E0317 18:45:12.131395 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:45:12.131857 update_engine[1185]: I0317 18:45:12.131493 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:45:12.308907 kubelet[1974]: I0317 18:45:12.308860 1974 scope.go:117] "RemoveContainer" containerID="d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685" Mar 17 18:45:12.314015 env[1192]: time="2025-03-17T18:45:12.313615329Z" level=info msg="RemoveContainer for \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\"" Mar 17 18:45:12.317625 env[1192]: time="2025-03-17T18:45:12.317524874Z" level=info msg="RemoveContainer for \"d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685\" returns successfully" Mar 17 18:45:12.388961 kubelet[1974]: I0317 18:45:12.388814 1974 topology_manager.go:215] "Topology Admit Handler" podUID="7450fab6-ff7c-450c-87a0-2bd33c81ff3a" podNamespace="kube-system" podName="cilium-4mqmm" Mar 17 18:45:12.389370 kubelet[1974]: E0317 18:45:12.389348 1974 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" containerName="mount-cgroup" Mar 17 18:45:12.389500 kubelet[1974]: I0317 18:45:12.389485 1974 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" containerName="mount-cgroup" Mar 17 18:45:12.399894 systemd[1]: 
Created slice kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice. Mar 17 18:45:12.472477 kubelet[1974]: I0317 18:45:12.472335 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-bpf-maps\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm" Mar 17 18:45:12.472477 kubelet[1974]: I0317 18:45:12.472435 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-hostproc\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm" Mar 17 18:45:12.472477 kubelet[1974]: I0317 18:45:12.472465 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-cilium-ipsec-secrets\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm" Mar 17 18:45:12.472477 kubelet[1974]: I0317 18:45:12.472491 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-host-proc-sys-net\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm" Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472527 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s8nh\" (UniqueName: \"kubernetes.io/projected/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-kube-api-access-9s8nh\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm" Mar 17 18:45:12.472864 kubelet[1974]: 
I0317 18:45:12.472553 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-cilium-cgroup\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472618 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-cni-path\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472637 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-xtables-lock\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472655 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-cilium-config-path\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472688 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-etc-cni-netd\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472709 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-host-proc-sys-kernel\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472733 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-hubble-tls\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472771 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-cilium-run\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472821 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-lib-modules\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.472864 kubelet[1974]: I0317 18:45:12.472852 1974 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7450fab6-ff7c-450c-87a0-2bd33c81ff3a-clustermesh-secrets\") pod \"cilium-4mqmm\" (UID: \"7450fab6-ff7c-450c-87a0-2bd33c81ff3a\") " pod="kube-system/cilium-4mqmm"
Mar 17 18:45:12.704140 kubelet[1974]: E0317 18:45:12.703984 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:12.705895 env[1192]: time="2025-03-17T18:45:12.705069799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mqmm,Uid:7450fab6-ff7c-450c-87a0-2bd33c81ff3a,Namespace:kube-system,Attempt:0,}"
Mar 17 18:45:12.758349 env[1192]: time="2025-03-17T18:45:12.758232774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:45:12.758349 env[1192]: time="2025-03-17T18:45:12.758287451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:45:12.758743 env[1192]: time="2025-03-17T18:45:12.758301917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:45:12.758743 env[1192]: time="2025-03-17T18:45:12.758477493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0 pid=3918 runtime=io.containerd.runc.v2
Mar 17 18:45:12.807633 systemd[1]: Started cri-containerd-8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0.scope.
Mar 17 18:45:12.869589 env[1192]: time="2025-03-17T18:45:12.869504989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mqmm,Uid:7450fab6-ff7c-450c-87a0-2bd33c81ff3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\""
Mar 17 18:45:12.873053 kubelet[1974]: E0317 18:45:12.872051 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:12.887315 env[1192]: time="2025-03-17T18:45:12.887227459Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:45:12.922451 env[1192]: time="2025-03-17T18:45:12.922385672Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4\""
Mar 17 18:45:12.925001 env[1192]: time="2025-03-17T18:45:12.924933090Z" level=info msg="StartContainer for \"dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4\""
Mar 17 18:45:12.969552 systemd[1]: Started cri-containerd-dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4.scope.
Mar 17 18:45:13.018881 env[1192]: time="2025-03-17T18:45:13.018822184Z" level=info msg="StartContainer for \"dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4\" returns successfully"
Mar 17 18:45:13.047851 systemd[1]: cri-containerd-dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4.scope: Deactivated successfully.
Mar 17 18:45:13.094820 env[1192]: time="2025-03-17T18:45:13.094692312Z" level=info msg="shim disconnected" id=dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4
Mar 17 18:45:13.095373 env[1192]: time="2025-03-17T18:45:13.095288936Z" level=warning msg="cleaning up after shim disconnected" id=dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4 namespace=k8s.io
Mar 17 18:45:13.095830 env[1192]: time="2025-03-17T18:45:13.095548242Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:13.113857 env[1192]: time="2025-03-17T18:45:13.113774238Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:13.316874 kubelet[1974]: E0317 18:45:13.316672 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:13.322425 env[1192]: time="2025-03-17T18:45:13.322370361Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:45:13.344020 env[1192]: time="2025-03-17T18:45:13.343891265Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92\""
Mar 17 18:45:13.345129 env[1192]: time="2025-03-17T18:45:13.345047950Z" level=info msg="StartContainer for \"807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92\""
Mar 17 18:45:13.386174 systemd[1]: Started cri-containerd-807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92.scope.
Mar 17 18:45:13.450416 env[1192]: time="2025-03-17T18:45:13.450353316Z" level=info msg="StartContainer for \"807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92\" returns successfully"
Mar 17 18:45:13.465174 systemd[1]: cri-containerd-807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92.scope: Deactivated successfully.
Mar 17 18:45:13.501946 env[1192]: time="2025-03-17T18:45:13.501887691Z" level=info msg="shim disconnected" id=807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92
Mar 17 18:45:13.501946 env[1192]: time="2025-03-17T18:45:13.501942156Z" level=warning msg="cleaning up after shim disconnected" id=807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92 namespace=k8s.io
Mar 17 18:45:13.501946 env[1192]: time="2025-03-17T18:45:13.501952113Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:13.517482 env[1192]: time="2025-03-17T18:45:13.517347079Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4064 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:13.629737 kubelet[1974]: I0317 18:45:13.629554 1974 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5d9eae-3225-4b2b-bc53-2bf88bd25b57" path="/var/lib/kubelet/pods/4d5d9eae-3225-4b2b-bc53-2bf88bd25b57/volumes"
Mar 17 18:45:13.930421 kubelet[1974]: W0317 18:45:13.930263 1974 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d5d9eae_3225_4b2b_bc53_2bf88bd25b57.slice/cri-containerd-d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685.scope WatchSource:0}: container "d7f90a4b0e04e0617b929d9cee0e4f70ad28ecc8f9698db945c4eb429f31d685" in namespace "k8s.io": not found
Mar 17 18:45:13.940868 kubelet[1974]: I0317 18:45:13.940808 1974 setters.go:580] "Node became not ready" node="ci-3510.3.7-8-addee6c60b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:45:13Z","lastTransitionTime":"2025-03-17T18:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:45:14.323368 kubelet[1974]: E0317 18:45:14.322940 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:14.326505 env[1192]: time="2025-03-17T18:45:14.326425532Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:45:14.358324 env[1192]: time="2025-03-17T18:45:14.358251935Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98\""
Mar 17 18:45:14.360557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107421418.mount: Deactivated successfully.
Mar 17 18:45:14.361780 env[1192]: time="2025-03-17T18:45:14.361723280Z" level=info msg="StartContainer for \"e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98\""
Mar 17 18:45:14.429256 systemd[1]: Started cri-containerd-e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98.scope.
Mar 17 18:45:14.477952 env[1192]: time="2025-03-17T18:45:14.477892770Z" level=info msg="StartContainer for \"e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98\" returns successfully"
Mar 17 18:45:14.488038 systemd[1]: cri-containerd-e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98.scope: Deactivated successfully.
Mar 17 18:45:14.527422 env[1192]: time="2025-03-17T18:45:14.527351249Z" level=info msg="shim disconnected" id=e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98
Mar 17 18:45:14.527422 env[1192]: time="2025-03-17T18:45:14.527425795Z" level=warning msg="cleaning up after shim disconnected" id=e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98 namespace=k8s.io
Mar 17 18:45:14.528157 env[1192]: time="2025-03-17T18:45:14.527441781Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:14.543168 env[1192]: time="2025-03-17T18:45:14.543091396Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:14.596453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98-rootfs.mount: Deactivated successfully.
Mar 17 18:45:14.626170 kubelet[1974]: E0317 18:45:14.626061 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bcmn6" podUID="12fb3e7e-7199-4655-9433-67fd95c1d30f"
Mar 17 18:45:15.330848 kubelet[1974]: E0317 18:45:15.329113 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:15.338758 env[1192]: time="2025-03-17T18:45:15.338679724Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:45:15.363501 env[1192]: time="2025-03-17T18:45:15.363400989Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80\""
Mar 17 18:45:15.366904 env[1192]: time="2025-03-17T18:45:15.366847228Z" level=info msg="StartContainer for \"f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80\""
Mar 17 18:45:15.369191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783454872.mount: Deactivated successfully.
Mar 17 18:45:15.402965 systemd[1]: Started cri-containerd-f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80.scope.
Mar 17 18:45:15.459951 systemd[1]: cri-containerd-f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80.scope: Deactivated successfully.
Mar 17 18:45:15.463243 env[1192]: time="2025-03-17T18:45:15.462321476Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice/cri-containerd-f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80.scope/memory.events\": no such file or directory"
Mar 17 18:45:15.467985 env[1192]: time="2025-03-17T18:45:15.467884734Z" level=info msg="StartContainer for \"f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80\" returns successfully"
Mar 17 18:45:15.526902 env[1192]: time="2025-03-17T18:45:15.526826035Z" level=info msg="shim disconnected" id=f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80
Mar 17 18:45:15.527489 env[1192]: time="2025-03-17T18:45:15.527453629Z" level=warning msg="cleaning up after shim disconnected" id=f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80 namespace=k8s.io
Mar 17 18:45:15.528004 env[1192]: time="2025-03-17T18:45:15.527657163Z" level=info msg="cleaning up dead shim"
Mar 17 18:45:15.557855 env[1192]: time="2025-03-17T18:45:15.557788366Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4180 runtime=io.containerd.runc.v2\n"
Mar 17 18:45:15.597068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80-rootfs.mount: Deactivated successfully.
Mar 17 18:45:15.628172 kubelet[1974]: E0317 18:45:15.626414 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b95g5" podUID="ea4c2c13-350f-4ee1-b261-175b2cd9fd80"
Mar 17 18:45:16.351051 kubelet[1974]: E0317 18:45:16.351003 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:16.360585 env[1192]: time="2025-03-17T18:45:16.357454503Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:45:16.504749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293187228.mount: Deactivated successfully.
Mar 17 18:45:16.521312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147602222.mount: Deactivated successfully.
Mar 17 18:45:16.532923 env[1192]: time="2025-03-17T18:45:16.532783496Z" level=info msg="CreateContainer within sandbox \"8c1a1a004173f6af609d6ac2219a66f884925027ee44a9964a925086272780f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869\""
Mar 17 18:45:16.535841 env[1192]: time="2025-03-17T18:45:16.535637995Z" level=info msg="StartContainer for \"26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869\""
Mar 17 18:45:16.590173 systemd[1]: Started cri-containerd-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869.scope.
Mar 17 18:45:16.628700 kubelet[1974]: E0317 18:45:16.625671 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bcmn6" podUID="12fb3e7e-7199-4655-9433-67fd95c1d30f"
Mar 17 18:45:16.670952 env[1192]: time="2025-03-17T18:45:16.670874374Z" level=info msg="StartContainer for \"26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869\" returns successfully"
Mar 17 18:45:16.711873 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.M6E7B9.mount: Deactivated successfully.
Mar 17 18:45:16.800585 kubelet[1974]: E0317 18:45:16.800494 1974 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:45:17.050522 kubelet[1974]: W0317 18:45:17.050328 1974 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice/cri-containerd-dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4.scope WatchSource:0}: task dbc8f84d6cbb8b30bf561e5bb06c44c9c0ceeb79a985e3f333cc7a5e0acd24f4 not found: not found
Mar 17 18:45:17.358441 kubelet[1974]: E0317 18:45:17.358386 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:17.466628 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:45:17.626248 kubelet[1974]: E0317 18:45:17.626039 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b95g5" podUID="ea4c2c13-350f-4ee1-b261-175b2cd9fd80"
Mar 17 18:45:18.625488 kubelet[1974]: E0317 18:45:18.625396 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bcmn6" podUID="12fb3e7e-7199-4655-9433-67fd95c1d30f"
Mar 17 18:45:18.709461 kubelet[1974]: E0317 18:45:18.709405 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:19.244308 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.yksLlx.mount: Deactivated successfully.
Mar 17 18:45:19.627472 kubelet[1974]: E0317 18:45:19.625614 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b95g5" podUID="ea4c2c13-350f-4ee1-b261-175b2cd9fd80"
Mar 17 18:45:20.169455 kubelet[1974]: W0317 18:45:20.169396 1974 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice/cri-containerd-807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92.scope WatchSource:0}: task 807c5e26c004538baa0f346e737bbd5636a0b1b988670efc25ae1affb4b73f92 not found: not found
Mar 17 18:45:20.191444 systemd[1]: Started sshd@31-146.190.61.194:22-218.92.0.158:45046.service.
Mar 17 18:45:20.626181 kubelet[1974]: E0317 18:45:20.626069 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-bcmn6" podUID="12fb3e7e-7199-4655-9433-67fd95c1d30f"
Mar 17 18:45:21.152777 sshd[4567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Mar 17 18:45:21.235893 systemd-networkd[1004]: lxc_health: Link UP
Mar 17 18:45:21.244204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:45:21.243788 systemd-networkd[1004]: lxc_health: Gained carrier
Mar 17 18:45:21.466182 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.1tJZLF.mount: Deactivated successfully.
Mar 17 18:45:21.626864 kubelet[1974]: E0317 18:45:21.626781 1974 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-b95g5" podUID="ea4c2c13-350f-4ee1-b261-175b2cd9fd80"
Mar 17 18:45:22.135248 update_engine[1185]: I0317 18:45:22.134672 1185 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:45:22.135248 update_engine[1185]: I0317 18:45:22.134979 1185 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:45:22.135248 update_engine[1185]: E0317 18:45:22.135089 1185 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:45:22.135248 update_engine[1185]: I0317 18:45:22.135195 1185 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 18:45:22.626694 kubelet[1974]: E0317 18:45:22.626629 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:22.708485 kubelet[1974]: E0317 18:45:22.708435 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:22.752916 kubelet[1974]: I0317 18:45:22.750250 1974 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4mqmm" podStartSLOduration=10.750230285 podStartE2EDuration="10.750230285s" podCreationTimestamp="2025-03-17 18:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:45:17.385684802 +0000 UTC m=+136.014121022" watchObservedRunningTime="2025-03-17 18:45:22.750230285 +0000 UTC m=+141.378666526"
Mar 17 18:45:23.094892 systemd-networkd[1004]: lxc_health: Gained IPv6LL
Mar 17 18:45:23.225145 sshd[4567]: Failed password for root from 218.92.0.158 port 45046 ssh2
Mar 17 18:45:23.281192 kubelet[1974]: W0317 18:45:23.281134 1974 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice/cri-containerd-e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98.scope WatchSource:0}: task e0dd42677671a0cf9b8f9b715f49a23812f63c74ca6cdbf8b72b17905bacbc98 not found: not found
Mar 17 18:45:23.378208 kubelet[1974]: E0317 18:45:23.378042 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:23.628484 kubelet[1974]: E0317 18:45:23.628305 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:23.809207 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.6zdQrQ.mount: Deactivated successfully.
Mar 17 18:45:24.380797 kubelet[1974]: E0317 18:45:24.380566 1974 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:45:26.076554 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.VuwKfh.mount: Deactivated successfully.
Mar 17 18:45:26.390566 kubelet[1974]: W0317 18:45:26.390378 1974 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7450fab6_ff7c_450c_87a0_2bd33c81ff3a.slice/cri-containerd-f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80.scope WatchSource:0}: task f624c48b0cebfe970c02927e0e4385dd6d4171791a5433896319de2d6f86cb80 not found: not found
Mar 17 18:45:27.646744 sshd[4567]: Failed password for root from 218.92.0.158 port 45046 ssh2
Mar 17 18:45:28.291005 systemd[1]: run-containerd-runc-k8s.io-26c24fc1dcae9fec33c9e99e45c7fc52ff950d0d04285f55774e9da074ade869-runc.7Hbq9a.mount: Deactivated successfully.
Mar 17 18:45:28.412740 sshd[3783]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:28.417517 systemd[1]: sshd@30-146.190.61.194:22-139.178.68.195:35440.service: Deactivated successfully.
Mar 17 18:45:28.418859 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:45:28.421496 systemd-logind[1183]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:45:28.423887 systemd-logind[1183]: Removed session 29.
Mar 17 18:45:30.592666 sshd[4567]: Failed password for root from 218.92.0.158 port 45046 ssh2
Mar 17 18:45:30.898007 sshd[4567]: Received disconnect from 218.92.0.158 port 45046:11: [preauth]
Mar 17 18:45:30.898007 sshd[4567]: Disconnected from authenticating user root 218.92.0.158 port 45046 [preauth]
Mar 17 18:45:30.897911 sshd[4567]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Mar 17 18:45:30.899972 systemd[1]: sshd@31-146.190.61.194:22-218.92.0.158:45046.service: Deactivated successfully.