Dec 13 08:47:22.989009 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 08:47:22.989051 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:22.989074 kernel: BIOS-provided physical RAM map:
Dec 13 08:47:22.989089 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 08:47:22.989103 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 08:47:22.989118 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 08:47:22.989138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 08:47:22.989154 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 08:47:22.989170 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 08:47:22.989215 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 08:47:22.989241 kernel: NX (Execute Disable) protection: active
Dec 13 08:47:22.989253 kernel: APIC: Static calls initialized
Dec 13 08:47:22.989264 kernel: SMBIOS 2.8 present.
Dec 13 08:47:22.989275 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 08:47:22.989289 kernel: Hypervisor detected: KVM
Dec 13 08:47:22.989313 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 08:47:22.989338 kernel: kvm-clock: using sched offset of 3541187733 cycles
Dec 13 08:47:22.989360 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 08:47:22.989381 kernel: tsc: Detected 2294.608 MHz processor
Dec 13 08:47:22.989404 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 08:47:22.989427 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 08:47:22.989449 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 08:47:22.989474 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 08:47:22.989495 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 08:47:22.989518 kernel: ACPI: Early table checksum verification disabled
Dec 13 08:47:22.989536 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 13 08:47:22.989555 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989573 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989591 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989609 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 08:47:22.989627 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989645 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989663 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989685 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 08:47:22.989703 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 08:47:22.989721 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 08:47:22.989739 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 08:47:22.989757 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 08:47:22.989775 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 08:47:22.989793 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 08:47:22.989826 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 08:47:22.989845 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 08:47:22.989865 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 08:47:22.989884 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 08:47:22.989904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 08:47:22.989923 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Dec 13 08:47:22.989942 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Dec 13 08:47:22.989965 kernel: Zone ranges:
Dec 13 08:47:22.989987 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 08:47:22.990008 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Dec 13 08:47:22.990021 kernel: Normal empty
Dec 13 08:47:22.990035 kernel: Movable zone start for each node
Dec 13 08:47:22.990047 kernel: Early memory node ranges
Dec 13 08:47:22.990059 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 08:47:22.990071 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 08:47:22.990083 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 13 08:47:22.990113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 08:47:22.990139 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 08:47:22.990159 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 13 08:47:22.990179 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 08:47:22.992261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 08:47:22.992283 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 08:47:22.992303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 08:47:22.992323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 08:47:22.992342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 08:47:22.992369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 08:47:22.992390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 08:47:22.992405 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 08:47:22.992418 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 08:47:22.992431 kernel: TSC deadline timer available
Dec 13 08:47:22.992451 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 08:47:22.992471 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 08:47:22.992490 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 08:47:22.992516 kernel: Booting paravirtualized kernel on KVM
Dec 13 08:47:22.992532 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 08:47:22.992553 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 08:47:22.992566 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 08:47:22.992582 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 08:47:22.992597 kernel: pcpu-alloc: [0] 0 1
Dec 13 08:47:22.992611 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 08:47:22.992628 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:22.992641 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 08:47:22.992657 kernel: random: crng init done
Dec 13 08:47:22.992685 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 08:47:22.992705 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 08:47:22.992724 kernel: Fallback order for Node 0: 0
Dec 13 08:47:22.992744 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Dec 13 08:47:22.992763 kernel: Policy zone: DMA32
Dec 13 08:47:22.992782 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 08:47:22.992802 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 08:47:22.992822 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 08:47:22.992845 kernel: Kernel/User page tables isolation: enabled
Dec 13 08:47:22.992865 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 08:47:22.992884 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 08:47:22.992904 kernel: Dynamic Preempt: voluntary
Dec 13 08:47:22.992923 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 08:47:22.992944 kernel: rcu: RCU event tracing is enabled.
Dec 13 08:47:22.992964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 08:47:22.992983 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 08:47:22.993002 kernel: Rude variant of Tasks RCU enabled.
Dec 13 08:47:22.993022 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 08:47:22.993045 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 08:47:22.993074 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 08:47:22.993088 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 08:47:22.993108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 08:47:22.993120 kernel: Console: colour VGA+ 80x25
Dec 13 08:47:22.993132 kernel: printk: console [tty0] enabled
Dec 13 08:47:22.993144 kernel: printk: console [ttyS0] enabled
Dec 13 08:47:22.993157 kernel: ACPI: Core revision 20230628
Dec 13 08:47:22.993170 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 08:47:22.995253 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 08:47:22.995283 kernel: x2apic enabled
Dec 13 08:47:22.995304 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 08:47:22.995324 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 08:47:22.995349 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:47:22.995370 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Dec 13 08:47:22.995389 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 08:47:22.995409 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 08:47:22.995451 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 08:47:22.995487 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 08:47:22.995512 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 08:47:22.995536 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 08:47:22.995557 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 08:47:22.995578 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 08:47:22.995598 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 08:47:22.995619 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 08:47:22.995641 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 08:47:22.995672 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 08:47:22.995693 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 08:47:22.995713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 08:47:22.995734 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 08:47:22.995755 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 08:47:22.995776 kernel: Freeing SMP alternatives memory: 32K
Dec 13 08:47:22.995797 kernel: pid_max: default: 32768 minimum: 301
Dec 13 08:47:22.995818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 08:47:22.995843 kernel: landlock: Up and running.
Dec 13 08:47:22.995864 kernel: SELinux: Initializing.
Dec 13 08:47:22.995885 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:47:22.995906 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 08:47:22.995927 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 08:47:22.995947 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:22.995991 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:22.996018 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 08:47:22.996044 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 08:47:22.996066 kernel: signal: max sigframe size: 1776
Dec 13 08:47:22.996078 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 08:47:22.996094 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 08:47:22.996116 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 08:47:22.996137 kernel: smp: Bringing up secondary CPUs ...
Dec 13 08:47:22.996158 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 08:47:22.996211 kernel: .... node #0, CPUs: #1
Dec 13 08:47:22.996241 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 08:47:22.996263 kernel: smpboot: Max logical packages: 1
Dec 13 08:47:22.996297 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Dec 13 08:47:22.996314 kernel: devtmpfs: initialized
Dec 13 08:47:22.996328 kernel: x86/mm: Memory block size: 128MB
Dec 13 08:47:22.996341 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 08:47:22.996356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 08:47:22.996383 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 08:47:22.996409 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 08:47:22.996430 kernel: audit: initializing netlink subsys (disabled)
Dec 13 08:47:22.996451 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 08:47:22.996477 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 08:47:22.996499 kernel: audit: type=2000 audit(1734079641.665:1): state=initialized audit_enabled=0 res=1
Dec 13 08:47:22.996526 kernel: cpuidle: using governor menu
Dec 13 08:47:22.996540 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 08:47:22.996554 kernel: dca service started, version 1.12.1
Dec 13 08:47:22.996568 kernel: PCI: Using configuration type 1 for base access
Dec 13 08:47:22.996582 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 08:47:22.996594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 08:47:22.996607 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 08:47:22.996626 kernel: ACPI: Added _OSI(Module Device)
Dec 13 08:47:22.996643 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 08:47:22.996670 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 08:47:22.996691 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 08:47:22.996711 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 08:47:22.996732 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 08:47:22.996753 kernel: ACPI: Interpreter enabled
Dec 13 08:47:22.996774 kernel: ACPI: PM: (supports S0 S5)
Dec 13 08:47:22.996795 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 08:47:22.996821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 08:47:22.996842 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 08:47:22.996869 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 08:47:22.996884 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 08:47:22.998268 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 08:47:22.998472 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 08:47:22.998647 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 08:47:22.998681 kernel: acpiphp: Slot [3] registered
Dec 13 08:47:22.998704 kernel: acpiphp: Slot [4] registered
Dec 13 08:47:22.998725 kernel: acpiphp: Slot [5] registered
Dec 13 08:47:22.998746 kernel: acpiphp: Slot [6] registered
Dec 13 08:47:22.998767 kernel: acpiphp: Slot [7] registered
Dec 13 08:47:22.998788 kernel: acpiphp: Slot [8] registered
Dec 13 08:47:22.998815 kernel: acpiphp: Slot [9] registered
Dec 13 08:47:22.998831 kernel: acpiphp: Slot [10] registered
Dec 13 08:47:22.998846 kernel: acpiphp: Slot [11] registered
Dec 13 08:47:22.998865 kernel: acpiphp: Slot [12] registered
Dec 13 08:47:23.000230 kernel: acpiphp: Slot [13] registered
Dec 13 08:47:23.000258 kernel: acpiphp: Slot [14] registered
Dec 13 08:47:23.000279 kernel: acpiphp: Slot [15] registered
Dec 13 08:47:23.000300 kernel: acpiphp: Slot [16] registered
Dec 13 08:47:23.000321 kernel: acpiphp: Slot [17] registered
Dec 13 08:47:23.000342 kernel: acpiphp: Slot [18] registered
Dec 13 08:47:23.000363 kernel: acpiphp: Slot [19] registered
Dec 13 08:47:23.000383 kernel: acpiphp: Slot [20] registered
Dec 13 08:47:23.000404 kernel: acpiphp: Slot [21] registered
Dec 13 08:47:23.000433 kernel: acpiphp: Slot [22] registered
Dec 13 08:47:23.000454 kernel: acpiphp: Slot [23] registered
Dec 13 08:47:23.000479 kernel: acpiphp: Slot [24] registered
Dec 13 08:47:23.000502 kernel: acpiphp: Slot [25] registered
Dec 13 08:47:23.000522 kernel: acpiphp: Slot [26] registered
Dec 13 08:47:23.000543 kernel: acpiphp: Slot [27] registered
Dec 13 08:47:23.000564 kernel: acpiphp: Slot [28] registered
Dec 13 08:47:23.000584 kernel: acpiphp: Slot [29] registered
Dec 13 08:47:23.000605 kernel: acpiphp: Slot [30] registered
Dec 13 08:47:23.000631 kernel: acpiphp: Slot [31] registered
Dec 13 08:47:23.000651 kernel: PCI host bridge to bus 0000:00
Dec 13 08:47:23.000860 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 08:47:23.000962 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 08:47:23.001057 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 08:47:23.001148 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 08:47:23.001252 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 08:47:23.001343 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 08:47:23.001519 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 08:47:23.001685 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 08:47:23.001845 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 08:47:23.001993 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 08:47:23.002144 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 08:47:23.006435 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 08:47:23.006630 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 08:47:23.006800 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 08:47:23.007009 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 08:47:23.007172 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 08:47:23.007439 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 08:47:23.007669 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 08:47:23.007838 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 08:47:23.008016 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 08:47:23.008172 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 08:47:23.012366 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 08:47:23.012550 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 08:47:23.012719 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 08:47:23.012886 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 08:47:23.013095 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:47:23.013279 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 08:47:23.013447 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 08:47:23.013606 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 08:47:23.013792 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 08:47:23.013960 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 08:47:23.014122 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 08:47:23.015514 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 08:47:23.015715 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 08:47:23.015883 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 08:47:23.016044 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 08:47:23.016219 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 08:47:23.016410 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:47:23.016577 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 08:47:23.016751 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 08:47:23.016912 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 08:47:23.017086 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 08:47:23.020239 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 08:47:23.020433 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 08:47:23.020594 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 08:47:23.020807 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 08:47:23.020990 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 08:47:23.021152 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 08:47:23.021175 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 08:47:23.021210 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 08:47:23.021228 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 08:47:23.021246 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 08:47:23.021264 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 08:47:23.021288 kernel: iommu: Default domain type: Translated
Dec 13 08:47:23.021305 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 08:47:23.021322 kernel: PCI: Using ACPI for IRQ routing
Dec 13 08:47:23.021340 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 08:47:23.021358 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 08:47:23.021375 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 08:47:23.021544 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 08:47:23.021709 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 08:47:23.021878 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 08:47:23.021901 kernel: vgaarb: loaded
Dec 13 08:47:23.021919 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 08:47:23.021937 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 08:47:23.021955 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 08:47:23.021973 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 08:47:23.021991 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 08:47:23.022009 kernel: pnp: PnP ACPI init
Dec 13 08:47:23.022026 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 08:47:23.022049 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 08:47:23.022066 kernel: NET: Registered PF_INET protocol family
Dec 13 08:47:23.022084 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 08:47:23.022102 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 08:47:23.022120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 08:47:23.022137 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 08:47:23.022156 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 08:47:23.022174 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 08:47:23.024312 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:47:23.024347 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 08:47:23.024367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 08:47:23.024385 kernel: NET: Registered PF_XDP protocol family
Dec 13 08:47:23.024584 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 08:47:23.024734 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 08:47:23.024884 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 08:47:23.025029 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 08:47:23.025173 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 08:47:23.026440 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 08:47:23.026617 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 08:47:23.026642 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 08:47:23.026805 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 36750 usecs
Dec 13 08:47:23.026829 kernel: PCI: CLS 0 bytes, default 64
Dec 13 08:47:23.026845 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 08:47:23.026858 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 08:47:23.026872 kernel: Initialise system trusted keyrings
Dec 13 08:47:23.026895 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 08:47:23.026921 kernel: Key type asymmetric registered
Dec 13 08:47:23.026945 kernel: Asymmetric key parser 'x509' registered
Dec 13 08:47:23.026966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 08:47:23.026987 kernel: io scheduler mq-deadline registered
Dec 13 08:47:23.027008 kernel: io scheduler kyber registered
Dec 13 08:47:23.027030 kernel: io scheduler bfq registered
Dec 13 08:47:23.027050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 08:47:23.027072 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 08:47:23.027093 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 08:47:23.027117 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 08:47:23.027138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 08:47:23.027159 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 08:47:23.027180 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 08:47:23.028254 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 08:47:23.028266 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 08:47:23.028292 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 08:47:23.028503 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 08:47:23.028653 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 08:47:23.028794 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T08:47:22 UTC (1734079642)
Dec 13 08:47:23.028930 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 08:47:23.028955 kernel: intel_pstate: CPU model not supported
Dec 13 08:47:23.028976 kernel: NET: Registered PF_INET6 protocol family
Dec 13 08:47:23.028997 kernel: Segment Routing with IPv6
Dec 13 08:47:23.029019 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 08:47:23.029039 kernel: NET: Registered PF_PACKET protocol family
Dec 13 08:47:23.029065 kernel: Key type dns_resolver registered
Dec 13 08:47:23.029085 kernel: IPI shorthand broadcast: enabled
Dec 13 08:47:23.029106 kernel: sched_clock: Marking stable (1168003605, 163952706)->(1367625389, -35669078)
Dec 13 08:47:23.029127 kernel: registered taskstats version 1
Dec 13 08:47:23.029148 kernel: Loading compiled-in X.509 certificates
Dec 13 08:47:23.029169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 08:47:23.031224 kernel: Key type .fscrypt registered
Dec 13 08:47:23.031247 kernel: Key type fscrypt-provisioning registered
Dec 13 08:47:23.031262 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 08:47:23.031285 kernel: ima: Allocated hash algorithm: sha1
Dec 13 08:47:23.031305 kernel: ima: No architecture policies found
Dec 13 08:47:23.031325 kernel: clk: Disabling unused clocks
Dec 13 08:47:23.031346 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 08:47:23.031368 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 08:47:23.031413 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 08:47:23.031439 kernel: Run /init as init process
Dec 13 08:47:23.031461 kernel: with arguments:
Dec 13 08:47:23.031497 kernel: /init
Dec 13 08:47:23.031510 kernel: with environment:
Dec 13 08:47:23.031520 kernel: HOME=/
Dec 13 08:47:23.031530 kernel: TERM=linux
Dec 13 08:47:23.031539 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 08:47:23.031557 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:47:23.031570 systemd[1]: Detected virtualization kvm.
Dec 13 08:47:23.031581 systemd[1]: Detected architecture x86-64.
Dec 13 08:47:23.031591 systemd[1]: Running in initrd.
Dec 13 08:47:23.031604 systemd[1]: No hostname configured, using default hostname.
Dec 13 08:47:23.031614 systemd[1]: Hostname set to .
Dec 13 08:47:23.031625 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:47:23.031635 systemd[1]: Queued start job for default target initrd.target.
Dec 13 08:47:23.031645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:23.031655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:23.031667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 08:47:23.031677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:47:23.031690 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 08:47:23.031701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 08:47:23.031713 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 08:47:23.031724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 08:47:23.031734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:23.031744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:23.031757 systemd[1]: Reached target paths.target - Path Units.
Dec 13 08:47:23.031768 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:47:23.031778 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:47:23.031791 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 08:47:23.031801 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:47:23.031812 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:47:23.031825 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:47:23.031836 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:47:23.031846 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:23.031857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:23.031867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:23.031877 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 08:47:23.031888 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 08:47:23.031898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:47:23.031911 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 08:47:23.031921 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 08:47:23.031932 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:47:23.031942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:47:23.031952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:23.031963 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 08:47:23.031973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:23.032019 systemd-journald[184]: Collecting audit messages is disabled.
Dec 13 08:47:23.032048 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 08:47:23.032059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:47:23.032074 systemd-journald[184]: Journal started
Dec 13 08:47:23.032096 systemd-journald[184]: Runtime Journal (/run/log/journal/6a5897d6e42b42f19ebde63fdb523b30) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:47:23.027245 systemd-modules-load[185]: Inserted module 'overlay'
Dec 13 08:47:23.072938 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:47:23.073003 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 08:47:23.073031 kernel: Bridge firewalling registered
Dec 13 08:47:23.073058 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:47:23.069457 systemd-modules-load[185]: Inserted module 'br_netfilter'
Dec 13 08:47:23.077915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:23.078797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:23.086423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:23.092456 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:47:23.097786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:47:23.100584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:47:23.108051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:23.123984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:23.125989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:23.127543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:23.134461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 08:47:23.138434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:23.155259 dracut-cmdline[217]: dracut-dracut-053
Dec 13 08:47:23.161032 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 08:47:23.189841 systemd-resolved[220]: Positive Trust Anchors:
Dec 13 08:47:23.189861 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:47:23.189900 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:47:23.193038 systemd-resolved[220]: Defaulting to hostname 'linux'.
Dec 13 08:47:23.194301 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:23.194911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:23.284226 kernel: SCSI subsystem initialized
Dec 13 08:47:23.295247 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 08:47:23.308224 kernel: iscsi: registered transport (tcp)
Dec 13 08:47:23.333251 kernel: iscsi: registered transport (qla4xxx)
Dec 13 08:47:23.333357 kernel: QLogic iSCSI HBA Driver
Dec 13 08:47:23.387741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:47:23.400513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 08:47:23.434263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 08:47:23.434361 kernel: device-mapper: uevent: version 1.0.3
Dec 13 08:47:23.434392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 08:47:23.481262 kernel: raid6: avx2x4 gen() 16129 MB/s
Dec 13 08:47:23.498239 kernel: raid6: avx2x2 gen() 17242 MB/s
Dec 13 08:47:23.515526 kernel: raid6: avx2x1 gen() 13308 MB/s
Dec 13 08:47:23.515626 kernel: raid6: using algorithm avx2x2 gen() 17242 MB/s
Dec 13 08:47:23.535261 kernel: raid6: .... xor() 18372 MB/s, rmw enabled
Dec 13 08:47:23.535350 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 08:47:23.559227 kernel: xor: automatically using best checksumming function avx
Dec 13 08:47:23.740242 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 08:47:23.754274 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:47:23.761430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:23.789507 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Dec 13 08:47:23.795074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:23.805494 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 08:47:23.823560 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Dec 13 08:47:23.863699 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:47:23.868424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:47:23.945521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:23.956572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 08:47:23.984141 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:47:23.987020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:47:23.987743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:23.989492 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:47:24.001382 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 08:47:24.019516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:47:24.095888 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 08:47:24.098211 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 13 08:47:24.192400 kernel: ACPI: bus type USB registered
Dec 13 08:47:24.192436 kernel: usbcore: registered new interface driver usbfs
Dec 13 08:47:24.192462 kernel: libata version 3.00 loaded.
Dec 13 08:47:24.192499 kernel: scsi host0: Virtio SCSI HBA
Dec 13 08:47:24.192734 kernel: usbcore: registered new interface driver hub
Dec 13 08:47:24.192754 kernel: usbcore: registered new device driver usb
Dec 13 08:47:24.192772 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 08:47:24.192931 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 08:47:24.193051 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 08:47:24.193065 kernel: scsi host1: ata_piix
Dec 13 08:47:24.193255 kernel: AES CTR mode by8 optimization enabled
Dec 13 08:47:24.193270 kernel: scsi host2: ata_piix
Dec 13 08:47:24.193394 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 08:47:24.193408 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Dec 13 08:47:24.193421 kernel: GPT:9289727 != 125829119
Dec 13 08:47:24.193434 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Dec 13 08:47:24.193446 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 08:47:24.193459 kernel: GPT:9289727 != 125829119
Dec 13 08:47:24.193476 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 08:47:24.193489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:24.193501 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 13 08:47:24.208262 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Dec 13 08:47:24.162975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:47:24.163080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:24.164209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:24.165077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:24.165317 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:24.166572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:24.172852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:24.257244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:24.276558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 08:47:24.295599 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:24.354222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446)
Dec 13 08:47:24.359243 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (459)
Dec 13 08:47:24.367249 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 08:47:24.373916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 08:47:24.382176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:47:24.386769 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 08:47:24.387550 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 08:47:24.394687 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 08:47:24.399289 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 08:47:24.399536 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 08:47:24.399672 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 13 08:47:24.399799 kernel: hub 1-0:1.0: USB hub found
Dec 13 08:47:24.399938 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 08:47:24.396528 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 08:47:24.408925 disk-uuid[548]: Primary Header is updated.
Dec 13 08:47:24.408925 disk-uuid[548]: Secondary Entries is updated.
Dec 13 08:47:24.408925 disk-uuid[548]: Secondary Header is updated.
Dec 13 08:47:24.415220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:24.423266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:25.427213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 08:47:25.428257 disk-uuid[549]: The operation has completed successfully.
Dec 13 08:47:25.482696 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 08:47:25.483612 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 08:47:25.504426 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 08:47:25.510310 sh[560]: Success
Dec 13 08:47:25.529552 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 08:47:25.615972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 08:47:25.619404 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 08:47:25.620993 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 08:47:25.651316 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 08:47:25.651398 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:25.654544 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 08:47:25.654627 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 08:47:25.655941 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 08:47:25.666351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 08:47:25.667757 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 08:47:25.673547 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 08:47:25.682553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 08:47:25.704514 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:25.704585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:25.705443 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:25.711244 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:25.723831 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 08:47:25.725565 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:25.733424 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 08:47:25.741449 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 08:47:25.816485 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:47:25.826553 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:47:25.877629 systemd-networkd[745]: lo: Link UP
Dec 13 08:47:25.878351 systemd-networkd[745]: lo: Gained carrier
Dec 13 08:47:25.881609 systemd-networkd[745]: Enumeration completed
Dec 13 08:47:25.881750 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:47:25.882146 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:47:25.882152 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 13 08:47:25.883364 systemd[1]: Reached target network.target - Network.
Dec 13 08:47:25.883408 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:47:25.883414 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 08:47:25.884418 systemd-networkd[745]: eth0: Link UP
Dec 13 08:47:25.884425 systemd-networkd[745]: eth0: Gained carrier
Dec 13 08:47:25.884438 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 08:47:25.892380 systemd-networkd[745]: eth1: Link UP
Dec 13 08:47:25.892385 systemd-networkd[745]: eth1: Gained carrier
Dec 13 08:47:25.892400 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 08:47:25.903533 ignition[664]: Ignition 2.19.0
Dec 13 08:47:25.903546 ignition[664]: Stage: fetch-offline
Dec 13 08:47:25.904261 systemd-networkd[745]: eth0: DHCPv4 address 143.198.66.7/20, gateway 143.198.64.1 acquired from 169.254.169.253
Dec 13 08:47:25.903609 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:25.903623 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:25.906511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:47:25.903764 ignition[664]: parsed url from cmdline: ""
Dec 13 08:47:25.909280 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.6/20, gateway 10.124.0.1 acquired from 169.254.169.253
Dec 13 08:47:25.903769 ignition[664]: no config URL provided
Dec 13 08:47:25.903776 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:47:25.903790 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:47:25.903798 ignition[664]: failed to fetch config: resource requires networking
Dec 13 08:47:25.905438 ignition[664]: Ignition finished successfully
Dec 13 08:47:25.913438 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 08:47:25.942213 ignition[752]: Ignition 2.19.0
Dec 13 08:47:25.942225 ignition[752]: Stage: fetch
Dec 13 08:47:25.942441 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:25.942454 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:25.942581 ignition[752]: parsed url from cmdline: ""
Dec 13 08:47:25.942586 ignition[752]: no config URL provided
Dec 13 08:47:25.942593 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 08:47:25.942602 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Dec 13 08:47:25.942624 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 13 08:47:25.959166 ignition[752]: GET result: OK
Dec 13 08:47:25.960125 ignition[752]: parsing config with SHA512: d64fec7290758551fe7c2f726cfd130232576011232c76a361fe13523bd3e002a5d2e1a1d96347923af19c53d5e8896623d90c7ee29c6b631177a3c161c8c635
Dec 13 08:47:25.966345 unknown[752]: fetched base config from "system"
Dec 13 08:47:25.967277 ignition[752]: fetch: fetch complete
Dec 13 08:47:25.966369 unknown[752]: fetched base config from "system"
Dec 13 08:47:25.967285 ignition[752]: fetch: fetch passed
Dec 13 08:47:25.966378 unknown[752]: fetched user config from "digitalocean"
Dec 13 08:47:25.967354 ignition[752]: Ignition finished successfully
Dec 13 08:47:25.972159 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 08:47:25.980557 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 08:47:26.005715 ignition[759]: Ignition 2.19.0
Dec 13 08:47:26.005729 ignition[759]: Stage: kargs
Dec 13 08:47:26.006003 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:26.006020 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:26.008307 ignition[759]: kargs: kargs passed
Dec 13 08:47:26.008409 ignition[759]: Ignition finished successfully
Dec 13 08:47:26.010664 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 08:47:26.017423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 08:47:26.052965 ignition[765]: Ignition 2.19.0
Dec 13 08:47:26.052978 ignition[765]: Stage: disks
Dec 13 08:47:26.053233 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:26.053248 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:26.055086 ignition[765]: disks: disks passed
Dec 13 08:47:26.056413 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 08:47:26.055155 ignition[765]: Ignition finished successfully
Dec 13 08:47:26.061843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 08:47:26.062995 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 08:47:26.064288 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:47:26.065618 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:47:26.066800 systemd[1]: Reached target basic.target - Basic System.
Dec 13 08:47:26.073412 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 08:47:26.092950 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 08:47:26.097905 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 08:47:26.109734 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 08:47:26.234230 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 08:47:26.235664 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 08:47:26.237040 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 08:47:26.243366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 08:47:26.257501 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 08:47:26.263882 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Dec 13 08:47:26.268871 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 08:47:26.273336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 08:47:26.275583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:47:26.284233 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (781)
Dec 13 08:47:26.287768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 08:47:26.298293 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:26.302209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:26.302300 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:26.306229 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:26.307348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 08:47:26.311504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 08:47:26.387234 coreos-metadata[784]: Dec 13 08:47:26.387 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:26.399972 coreos-metadata[783]: Dec 13 08:47:26.399 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 08:47:26.403703 coreos-metadata[784]: Dec 13 08:47:26.403 INFO Fetch successful
Dec 13 08:47:26.412220 coreos-metadata[784]: Dec 13 08:47:26.411 INFO wrote hostname ci-4081.2.1-b-2d211b5e28 to /sysroot/etc/hostname
Dec 13 08:47:26.416722 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 08:47:26.418437 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 08:47:26.421313 coreos-metadata[783]: Dec 13 08:47:26.420 INFO Fetch successful
Dec 13 08:47:26.430123 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Dec 13 08:47:26.432315 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Dec 13 08:47:26.433438 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Dec 13 08:47:26.438077 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 08:47:26.446067 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 08:47:26.578041 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 08:47:26.586446 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 08:47:26.590482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 08:47:26.607283 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:26.647303 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 08:47:26.652015 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 08:47:26.655061 ignition[902]: INFO : Ignition 2.19.0
Dec 13 08:47:26.655061 ignition[902]: INFO : Stage: mount
Dec 13 08:47:26.656861 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:26.656861 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:26.658711 ignition[902]: INFO : mount: mount passed
Dec 13 08:47:26.658711 ignition[902]: INFO : Ignition finished successfully
Dec 13 08:47:26.659094 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 08:47:26.666445 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 08:47:26.702590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 08:47:26.719237 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (914)
Dec 13 08:47:26.724300 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 08:47:26.724435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 08:47:26.724457 kernel: BTRFS info (device vda6): using free space tree
Dec 13 08:47:26.730250 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 08:47:26.733989 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
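The Flatcar Metadata Hostname Agent above fetches the droplet's metadata document and writes its hostname into the new root. A small sketch of that effect; the endpoint and target path come from the log, and "hostname" is the documented DigitalOcean metadata field:

```python
# Sketch of flatcar-metadata-hostname.service's effect: fetch the
# droplet metadata JSON and persist its hostname under the new root.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        hostname = json.load(resp)["hostname"]
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname())
```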
Dec 13 08:47:26.777039 ignition[931]: INFO : Ignition 2.19.0
Dec 13 08:47:26.777039 ignition[931]: INFO : Stage: files
Dec 13 08:47:26.778608 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:26.778608 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:26.780228 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 08:47:26.780996 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 08:47:26.780996 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 08:47:26.784602 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 08:47:26.785715 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 08:47:26.785715 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 08:47:26.785272 unknown[931]: wrote ssh authorized keys file for user: core
Dec 13 08:47:26.788962 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 08:47:26.788962 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 08:47:26.788962 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 08:47:26.788962 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 08:47:26.822034 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 08:47:26.897371 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 08:47:26.897371 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 08:47:26.900177 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 08:47:27.252676 systemd-networkd[745]: eth1: Gained IPv6LL
Dec 13 08:47:27.454564 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 08:47:27.572412 systemd-networkd[745]: eth0: Gained IPv6LL
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 08:47:27.657960 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:27.667782 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 08:47:28.082506 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Dec 13 08:47:28.342780 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 08:47:28.342780 ignition[931]: INFO : files: op(d): [started] processing unit "containerd.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(d): [finished] processing unit "containerd.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 08:47:28.345331 ignition[931]: INFO : files: files passed
Dec 13 08:47:28.345331 ignition[931]: INFO : Ignition finished successfully
Dec 13 08:47:28.346453 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 08:47:28.355531 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 08:47:28.360416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 08:47:28.365900 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 08:47:28.366491 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 08:47:28.387907 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:28.387907 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:28.391698 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 08:47:28.395341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 08:47:28.398032 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 08:47:28.403643 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 08:47:28.467542 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 08:47:28.467755 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 08:47:28.469726 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 08:47:28.470723 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 08:47:28.472055 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 08:47:28.486695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 08:47:28.510130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 08:47:28.518587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 08:47:28.547657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:28.548660 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:28.550054 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 08:47:28.551259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 08:47:28.551586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 08:47:28.553029 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 08:47:28.554506 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 08:47:28.555614 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 08:47:28.556721 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 08:47:28.557818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 08:47:28.559052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 08:47:28.560404 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 08:47:28.561646 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 08:47:28.562849 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 08:47:28.563940 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 08:47:28.564924 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 08:47:28.565157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 08:47:28.566453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:28.567368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:28.568507 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 08:47:28.568693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:28.569736 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 08:47:28.569960 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 08:47:28.571463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 08:47:28.571718 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 08:47:28.573574 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 08:47:28.573790 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 08:47:28.574626 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 08:47:28.574830 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 08:47:28.591247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 08:47:28.592100 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 08:47:28.592448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:28.596564 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 08:47:28.597145 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 08:47:28.597463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:28.608367 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 08:47:28.608529 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 08:47:28.619638 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 08:47:28.620587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 08:47:28.632404 ignition[984]: INFO : Ignition 2.19.0
Dec 13 08:47:28.632404 ignition[984]: INFO : Stage: umount
Dec 13 08:47:28.635315 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 08:47:28.635315 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 08:47:28.635315 ignition[984]: INFO : umount: umount passed
Dec 13 08:47:28.635315 ignition[984]: INFO : Ignition finished successfully
Dec 13 08:47:28.635905 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 08:47:28.636077 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 08:47:28.637695 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 08:47:28.637842 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 08:47:28.641572 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 08:47:28.641668 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 08:47:28.643517 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 08:47:28.643689 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 08:47:28.646694 systemd[1]: Stopped target network.target - Network.
Dec 13 08:47:28.647232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 08:47:28.647318 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 08:47:28.647983 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 08:47:28.648764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 08:47:28.653358 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:28.654387 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 08:47:28.655774 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 08:47:28.657023 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 08:47:28.657095 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 08:47:28.658340 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 08:47:28.658406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 08:47:28.659476 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 08:47:28.659572 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 08:47:28.661041 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 08:47:28.661136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 08:47:28.662518 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 08:47:28.664037 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:28.666823 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 08:47:28.667917 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 08:47:28.668055 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 08:47:28.668468 systemd-networkd[745]: eth0: DHCPv6 lease lost
Dec 13 08:47:28.670258 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 08:47:28.670385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 08:47:28.675357 systemd-networkd[745]: eth1: DHCPv6 lease lost
Dec 13 08:47:28.675670 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 08:47:28.675826 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:28.679855 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 08:47:28.680541 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 08:47:28.684060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 08:47:28.684162 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:28.690487 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 08:47:28.691601 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 08:47:28.691710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 08:47:28.694293 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 08:47:28.694390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:28.695653 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 08:47:28.695727 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:28.696401 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 08:47:28.696470 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:28.700675 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:28.716890 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 08:47:28.717144 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:28.718749 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 08:47:28.718895 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 08:47:28.720568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 08:47:28.720677 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:28.721631 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 08:47:28.721685 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:28.722773 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 08:47:28.722841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 08:47:28.724559 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 08:47:28.724632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 08:47:28.725941 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 08:47:28.726011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 08:47:28.733572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 08:47:28.734380 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 08:47:28.734485 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:28.735908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:28.735987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:28.746928 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 08:47:28.747076 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 08:47:28.748975 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 08:47:28.753483 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 08:47:28.771069 systemd[1]: Switching root.
Dec 13 08:47:28.843683 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Dec 13 08:47:28.843834 systemd-journald[184]: Journal stopped
Dec 13 08:47:30.575000 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 08:47:30.575094 kernel: SELinux: policy capability open_perms=1
Dec 13 08:47:30.575120 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 08:47:30.575145 kernel: SELinux: policy capability always_check_network=0
Dec 13 08:47:30.575175 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 08:47:30.575471 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 08:47:30.575503 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 08:47:30.575535 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 08:47:30.575567 kernel: audit: type=1403 audit(1734079649.114:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 08:47:30.575602 systemd[1]: Successfully loaded SELinux policy in 48.042ms.
Dec 13 08:47:30.575643 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.609ms.
Dec 13 08:47:30.575673 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 08:47:30.575700 systemd[1]: Detected virtualization kvm.
Dec 13 08:47:30.575728 systemd[1]: Detected architecture x86-64.
Dec 13 08:47:30.575760 systemd[1]: Detected first boot.
Dec 13 08:47:30.575788 systemd[1]: Hostname set to <ci-4081.2.1-b-2d211b5e28>.
Dec 13 08:47:30.575815 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 08:47:30.575845 zram_generator::config[1043]: No configuration found.
Dec 13 08:47:30.575873 systemd[1]: Populated /etc with preset unit settings.
Dec 13 08:47:30.575901 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 08:47:30.575937 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 08:47:30.575973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 08:47:30.576218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 08:47:30.576260 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 08:47:30.576289 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 08:47:30.576316 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 08:47:30.576372 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 08:47:30.576401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 08:47:30.576434 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 08:47:30.576461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 08:47:30.576496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 08:47:30.576523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 08:47:30.576550 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 08:47:30.576578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 08:47:30.576606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 08:47:30.576633 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 08:47:30.576659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 08:47:30.576686 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 08:47:30.576711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 08:47:30.576743 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 08:47:30.576766 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 08:47:30.576790 systemd[1]: Reached target swap.target - Swaps.
Dec 13 08:47:30.576813 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 08:47:30.576844 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 08:47:30.576872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 08:47:30.576912 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 08:47:30.576947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 08:47:30.576993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 08:47:30.577036 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 08:47:30.577067 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 08:47:30.577096 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 08:47:30.577124 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 08:47:30.577152 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 08:47:30.579302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:30.579406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 08:47:30.579443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 08:47:30.579473 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 08:47:30.579505 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 08:47:30.579533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:30.579562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 08:47:30.579593 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 08:47:30.579621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:30.579647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:47:30.579678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:30.579709 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 08:47:30.579740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:30.579768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 08:47:30.579797 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 08:47:30.579825 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 08:47:30.579852 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 08:47:30.579880 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 08:47:30.579923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 08:47:30.579951 kernel: fuse: init (API version 7.39)
Dec 13 08:47:30.579978 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 08:47:30.580007 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 08:47:30.580037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
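Each modprobe@<module>.service instance started above loads exactly one kernel module; a direct equivalent is one modprobe call per module name. A tiny sketch using the module names from the log:

```python
# Direct equivalent of the modprobe@.service instances above: load each
# module named in the log with a plain modprobe invocation.
import subprocess

for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    subprocess.run(["modprobe", module], check=True)
```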
Dec 13 08:47:30.580058 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 08:47:30.580077 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 08:47:30.580096 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 08:47:30.580115 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 08:47:30.580147 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 08:47:30.580177 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 08:47:30.587127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 08:47:30.587178 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 08:47:30.588320 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 08:47:30.588369 kernel: loop: module loaded
Dec 13 08:47:30.588401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:30.588436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:30.588479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:30.588512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:30.588547 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 08:47:30.588576 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 08:47:30.588611 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:30.588647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:30.588683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 08:47:30.588722 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 08:47:30.588757 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 08:47:30.588793 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 08:47:30.588874 systemd-journald[1134]: Collecting audit messages is disabled.
Dec 13 08:47:30.588968 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 08:47:30.589003 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 08:47:30.589039 kernel: ACPI: bus type drm_connector registered
Dec 13 08:47:30.589064 systemd-journald[1134]: Journal started
Dec 13 08:47:30.589127 systemd-journald[1134]: Runtime Journal (/run/log/journal/6a5897d6e42b42f19ebde63fdb523b30) is 4.9M, max 39.3M, 34.4M free.
Dec 13 08:47:30.600344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 08:47:30.618232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 08:47:30.626222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:30.639276 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 08:47:30.647234 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:30.659213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 08:47:30.679234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 08:47:30.686216 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 08:47:30.698825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 08:47:30.701059 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:47:30.701451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:47:30.704044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 08:47:30.704855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 08:47:30.705560 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 08:47:30.707102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 08:47:30.749023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 08:47:30.751446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 08:47:30.761465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 08:47:30.767763 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 08:47:30.779638 systemd-journald[1134]: Time spent on flushing to /var/log/journal/6a5897d6e42b42f19ebde63fdb523b30 is 40.534ms for 983 entries.
Dec 13 08:47:30.779638 systemd-journald[1134]: System Journal (/var/log/journal/6a5897d6e42b42f19ebde63fdb523b30) is 8.0M, max 195.6M, 187.6M free.
Dec 13 08:47:30.844504 systemd-journald[1134]: Received client request to flush runtime journal.
Dec 13 08:47:30.782903 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Dec 13 08:47:30.782951 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Dec 13 08:47:30.801623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 08:47:30.816600 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 08:47:30.838508 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 08:47:30.849858 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 08:47:30.886014 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 08:47:30.898608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 08:47:30.934029 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Dec 13 08:47:30.934607 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Dec 13 08:47:30.943991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 08:47:32.014370 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 08:47:32.032555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 08:47:32.069165 systemd-udevd[1214]: Using default interface naming scheme 'v255'.
Dec 13 08:47:32.104776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 08:47:32.113511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 08:47:32.149820 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
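The runtime-to-persistent journal handoff logged above (systemd-journal-flush.service asking journald to move /run/log/journal into /var/log/journal) can be driven manually with standard journalctl flags. A minimal sketch:

```python
# Mirror of systemd-journal-flush.service: ask journald to flush the
# runtime journal to persistent storage, then report disk usage.
# Both flags are standard journalctl options; root privileges required.
import subprocess

def flush_journal() -> None:
    subprocess.run(["journalctl", "--flush"], check=True)
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    print(usage.stdout.strip())

if __name__ == "__main__":
    flush_journal()
```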
Dec 13 08:47:32.202212 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1227)
Dec 13 08:47:32.218246 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1227)
Dec 13 08:47:32.271219 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 08:47:32.280217 kernel: ACPI: button: Power Button [PWRF]
Dec 13 08:47:32.302214 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 08:47:32.310050 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 08:47:32.314880 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 08:47:32.351206 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 08:47:32.364219 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 08:47:32.367204 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 08:47:32.373395 kernel: Console: switching to colour dummy device 80x25
Dec 13 08:47:32.375208 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 08:47:32.375282 kernel: [drm] features: -context_init
Dec 13 08:47:32.378210 kernel: [drm] number of scanouts: 1
Dec 13 08:47:32.379204 kernel: [drm] number of cap sets: 0
Dec 13 08:47:32.384206 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 08:47:32.391215 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 08:47:32.393676 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 08:47:32.401206 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 08:47:32.438890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:32.439345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:32.459538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:32.467549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:32.521410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1230)
Dec 13 08:47:32.524640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:32.525809 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 08:47:32.525878 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 08:47:32.525962 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:32.543998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:32.544302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:32.546719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:32.546997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:32.551745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:32.559985 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:32.560596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:32.568908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:32.630993 systemd-networkd[1219]: lo: Link UP
Dec 13 08:47:32.631005 systemd-networkd[1219]: lo: Gained carrier
Dec 13 08:47:32.638423 systemd-networkd[1219]: Enumeration completed
Dec 13 08:47:32.638610 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 08:47:32.642796 systemd-networkd[1219]: eth0: Configuring with /run/systemd/network/10-2e:67:1f:3f:84:ab.network.
Dec 13 08:47:32.643599 systemd-networkd[1219]: eth1: Configuring with /run/systemd/network/10-26:6e:0d:1f:43:db.network.
Dec 13 08:47:32.644157 systemd-networkd[1219]: eth0: Link UP
Dec 13 08:47:32.644162 systemd-networkd[1219]: eth0: Gained carrier
Dec 13 08:47:32.645455 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 08:47:32.649637 systemd-networkd[1219]: eth1: Link UP
Dec 13 08:47:32.649643 systemd-networkd[1219]: eth1: Gained carrier
Dec 13 08:47:32.708207 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 08:47:32.734220 kernel: EDAC MC: Ver: 3.0.0
Dec 13 08:47:32.731753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:32.743559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 08:47:32.759864 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:32.761492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:32.769431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:32.777045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 08:47:32.777881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:32.798573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 08:47:32.800282 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 08:47:32.812573 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 08:47:32.836417 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:47:32.867809 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 08:47:32.870709 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 08:47:32.879500 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 08:47:32.884132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 08:47:32.905308 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 08:47:32.945989 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 08:47:32.950809 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 08:47:32.959395 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 13 08:47:32.963748 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
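systemd-networkd above configures each NIC from a generated unit in /run/systemd/network whose name embeds the interface's MAC address. A sketch of producing such a unit; the [Match]/[Network] body is a plausible reconstruction, not the actual file from this droplet (only the path and naming scheme come from the log):

```python
# Sketch of a generated per-interface networkd unit like the
# /run/systemd/network/10-<mac>.network files named in the log.
from pathlib import Path

NETWORK_UNIT = """\
[Match]
MACAddress={mac}

[Network]
DHCP=yes
"""

def write_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    path = Path(rundir) / f"10-{mac}.network"
    path.write_text(NETWORK_UNIT.format(mac=mac))
    return path

if __name__ == "__main__":
    print(write_network_unit("2e:67:1f:3f:84:ab"))  # eth0's MAC from the log
```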
Dec 13 08:47:32.963809 systemd[1]: Reached target machines.target - Containers.
Dec 13 08:47:32.974525 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 08:47:32.992213 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 08:47:32.995605 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 13 08:47:32.997876 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 08:47:33.001254 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 08:47:33.009408 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 08:47:33.012385 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 08:47:33.013266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:33.018433 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 08:47:33.029550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 08:47:33.036443 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 08:47:33.045354 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 08:47:33.064060 kernel: loop0: detected capacity change from 0 to 8
Dec 13 08:47:33.073986 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 08:47:33.075390 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 08:47:33.083569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 08:47:33.101632 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 08:47:33.167238 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 08:47:33.216502 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 08:47:33.280344 kernel: loop4: detected capacity change from 0 to 8
Dec 13 08:47:33.286601 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 08:47:33.319227 kernel: loop6: detected capacity change from 0 to 211296
Dec 13 08:47:33.340675 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 08:47:33.363136 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 13 08:47:33.364015 (sd-merge)[1308]: Merged extensions into '/usr'.
Dec 13 08:47:33.372332 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 08:47:33.372552 systemd[1]: Reloading...
Dec 13 08:47:33.503218 zram_generator::config[1339]: No configuration found.
Dec 13 08:47:33.697293 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 08:47:33.756953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:47:33.850629 systemd[1]: Reloading finished in 477 ms.
Dec 13 08:47:33.875814 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 08:47:33.877919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 08:47:33.897542 systemd[1]: Starting ensure-sysext.service...
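The loopN capacity changes and the (sd-merge) lines above are systemd-sysext loop-mounting each extension image and overlaying it onto /usr. A minimal sketch, assuming the standard extension directories: enumerate the .raw images, then ask systemd-sysext to re-merge them (refresh is a standard verb of that tool):

```python
# Enumerate system extension images and re-merge them, roughly what
# systemd-sysext.service triggers at boot. Directory list is the
# conventional search path, not read from this machine's log.
import glob
import subprocess

EXTENSION_DIRS = ["/etc/extensions", "/var/lib/extensions"]

def list_extension_images() -> list:
    images = []
    for d in EXTENSION_DIRS:
        images.extend(sorted(glob.glob(f"{d}/*.raw")))
    return images

if __name__ == "__main__":
    for img in list_extension_images():
        print("extension image:", img)
    subprocess.run(["systemd-sysext", "refresh"], check=True)
```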
Dec 13 08:47:33.902458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 08:47:33.925451 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Dec 13 08:47:33.925486 systemd[1]: Reloading...
Dec 13 08:47:33.965020 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 08:47:33.966515 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 08:47:33.968599 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 08:47:33.969031 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Dec 13 08:47:33.969171 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Dec 13 08:47:33.973735 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:47:33.973894 systemd-tmpfiles[1387]: Skipping /boot
Dec 13 08:47:33.991768 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 08:47:33.991991 systemd-tmpfiles[1387]: Skipping /boot
Dec 13 08:47:34.047218 zram_generator::config[1415]: No configuration found.
Dec 13 08:47:34.265975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 08:47:34.375123 systemd[1]: Reloading finished in 448 ms.
Dec 13 08:47:34.405264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 08:47:34.421736 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 08:47:34.429524 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 08:47:34.443173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 08:47:34.461331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 08:47:34.469515 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 08:47:34.489544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.490942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:34.498668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:34.515695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:34.535841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:34.544648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:34.544887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.548524 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 08:47:34.555477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:34.555751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:34.559709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:34.559939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:34.561923 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:34.562124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:34.580854 augenrules[1497]: No rules
Dec 13 08:47:34.583012 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 08:47:34.592284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.593003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:34.602557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:34.612512 systemd-networkd[1219]: eth0: Gained IPv6LL
Dec 13 08:47:34.612859 systemd-networkd[1219]: eth1: Gained IPv6LL
Dec 13 08:47:34.620122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:34.630559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:34.634280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:34.659624 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 08:47:34.663216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.665851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 08:47:34.672749 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 08:47:34.676339 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 08:47:34.679618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:34.679925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:34.683437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:34.683887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:34.688634 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:34.691497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:34.714401 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 08:47:34.723168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.723809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 08:47:34.733443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 08:47:34.746010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 08:47:34.750780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 08:47:34.768212 systemd-resolved[1476]: Positive Trust Anchors:
Dec 13 08:47:34.768226 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 08:47:34.768268 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 08:47:34.773539 systemd-resolved[1476]: Using system hostname 'ci-4081.2.1-b-2d211b5e28'.
Dec 13 08:47:34.778537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 08:47:34.786177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 08:47:34.786320 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 08:47:34.786360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 08:47:34.787106 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 08:47:34.793240 systemd[1]: Finished ensure-sysext.service.
Dec 13 08:47:34.796800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 08:47:34.797131 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 08:47:34.799771 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 08:47:34.800081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 08:47:34.802626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 08:47:34.802959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 08:47:34.806519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 08:47:34.806960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 08:47:34.821284 systemd[1]: Reached target network.target - Network.
Dec 13 08:47:34.824145 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 08:47:34.824949 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 08:47:34.827095 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 08:47:34.827314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 08:47:34.837647 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 08:47:34.924525 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 08:47:34.927129 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 08:47:34.929243 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 08:47:34.930628 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 08:47:34.932252 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 08:47:34.933137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 08:47:34.933195 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:47:34.933883 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 08:47:34.934845 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 08:47:34.935688 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 08:47:34.936282 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:47:34.940513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 08:47:34.947837 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 08:47:34.955976 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 08:47:34.958885 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 08:47:34.961750 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:47:34.964790 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:47:34.965892 systemd[1]: System is tainted: cgroupsv1 Dec 13 08:47:34.965985 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:47:34.966025 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:47:34.977413 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 08:47:34.984550 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 08:47:34.996669 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 08:47:35.002245 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 08:47:35.007315 systemd-timesyncd[1540]: Contacted time server 172.234.37.140:123 (0.flatcar.pool.ntp.org). Dec 13 08:47:35.007424 systemd-timesyncd[1540]: Initial clock synchronization to Fri 2024-12-13 08:47:35.389119 UTC. Dec 13 08:47:35.021699 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 08:47:35.027489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 08:47:35.036748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:47:35.042681 jq[1549]: false Dec 13 08:47:35.059372 coreos-metadata[1546]: Dec 13 08:47:35.059 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:47:35.061537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 08:47:35.071710 coreos-metadata[1546]: Dec 13 08:47:35.071 INFO Fetch successful Dec 13 08:47:35.073577 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 08:47:35.099673 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 08:47:35.100918 dbus-daemon[1547]: [system] SELinux support is enabled Dec 13 08:47:35.120546 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
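
The coreos-metadata fetch above ("Fetch successful") pulls the droplet's metadata document from DigitalOcean's link-local endpoint, which appears verbatim in the log. The same document can be retrieved by hand when debugging the agent:

    curl -s http://169.254.169.254/metadata/v1.json
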
Dec 13 08:47:35.135595 extend-filesystems[1551]: Found loop4 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found loop5 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found loop6 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found loop7 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda1 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda2 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda3 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found usr Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda4 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda6 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda7 Dec 13 08:47:35.135595 extend-filesystems[1551]: Found vda9 Dec 13 08:47:35.135595 extend-filesystems[1551]: Checking size of /dev/vda9 Dec 13 08:47:35.131363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 08:47:35.168623 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 08:47:35.176825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 08:47:35.200627 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 08:47:35.210465 extend-filesystems[1551]: Resized partition /dev/vda9 Dec 13 08:47:35.223458 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 08:47:35.227328 extend-filesystems[1582]: resize2fs 1.47.1 (20-May-2024) Dec 13 08:47:35.246584 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 08:47:35.231993 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 08:47:35.278860 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 08:47:35.281483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 08:47:35.296863 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 08:47:35.298510 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 08:47:35.316336 jq[1581]: true Dec 13 08:47:35.318165 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 08:47:35.342621 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 08:47:35.343052 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 08:47:35.359767 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 08:47:35.393912 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 08:47:35.393912 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 08:47:35.393912 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 08:47:35.399110 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Dec 13 08:47:35.399110 extend-filesystems[1551]: Found vdb Dec 13 08:47:35.397029 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 08:47:35.409718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
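
The extend-filesystems episode above is an online ext4 grow: the partition table already exposes a larger /dev/vda9, and resize2fs extends the mounted root filesystem from 553472 to 15121403 4k blocks without unmounting. Done by hand, the sequence would be roughly as follows (the growpart step from cloud-utils is an assumption; the log itself only shows the resize2fs step):

    # grow partition 9 to the end of the disk, then grow the mounted filesystem
    growpart /dev/vda 9
    resize2fs /dev/vda9    # ext4 can be grown online while mounted on /
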
Dec 13 08:47:35.420142 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 08:47:35.437793 update_engine[1580]: I20241213 08:47:35.431142 1580 main.cc:92] Flatcar Update Engine starting Dec 13 08:47:35.469707 update_engine[1580]: I20241213 08:47:35.469281 1580 update_check_scheduler.cc:74] Next update check in 7m48s Dec 13 08:47:35.474419 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 08:47:35.487698 tar[1594]: linux-amd64/helm Dec 13 08:47:35.487006 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 08:47:35.487176 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 08:47:35.487243 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 08:47:35.489620 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 08:47:35.489832 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 08:47:35.489869 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 08:47:35.496860 systemd[1]: Started update-engine.service - Update Engine. Dec 13 08:47:35.502090 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 08:47:35.513114 jq[1595]: true Dec 13 08:47:35.514580 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 08:47:35.537280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1604) Dec 13 08:47:35.897233 bash[1645]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:47:35.896732 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 08:47:35.917618 systemd[1]: Starting sshkeys.service... Dec 13 08:47:35.931772 systemd-logind[1573]: New seat seat0. Dec 13 08:47:35.950542 systemd-logind[1573]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 08:47:35.950586 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 08:47:35.953858 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 08:47:36.021172 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 08:47:36.039782 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 08:47:36.096020 containerd[1596]: time="2024-12-13T08:47:36.095853601Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 08:47:36.205641 containerd[1596]: time="2024-12-13T08:47:36.204659493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 08:47:36.209442 coreos-metadata[1653]: Dec 13 08:47:36.206 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213463860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213537363Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213572259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213835836Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213862705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213951757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:47:36.214329 containerd[1596]: time="2024-12-13T08:47:36.213971611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214412182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214442963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214466820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214483892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214624619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.215644 containerd[1596]: time="2024-12-13T08:47:36.214959984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:47:36.220264 containerd[1596]: time="2024-12-13T08:47:36.219826434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:47:36.220264 containerd[1596]: time="2024-12-13T08:47:36.219889869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 08:47:36.220264 containerd[1596]: time="2024-12-13T08:47:36.220156525Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 08:47:36.220264 containerd[1596]: time="2024-12-13T08:47:36.220270999Z" level=info msg="metadata content store policy set" policy=shared Dec 13 08:47:36.221302 coreos-metadata[1653]: Dec 13 08:47:36.221 INFO Fetch successful Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.235542185Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.235698762Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.235788848Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.235818331Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.235848541Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 08:47:36.236261 containerd[1596]: time="2024-12-13T08:47:36.236149614Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 08:47:36.237729 containerd[1596]: time="2024-12-13T08:47:36.237673728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.237982119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238020952Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238047176Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238075707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238098913Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238120101Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238144144Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238168720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.238294 containerd[1596]: time="2024-12-13T08:47:36.238209993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244307213Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244404441Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244458903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244488481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244514407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244542159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244564548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244588211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244610097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244672574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244700932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244728845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244752592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.246277 containerd[1596]: time="2024-12-13T08:47:36.244776466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.244801242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.244831928Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.244872953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.244894767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.244913659Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245015048Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245044646Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245065059Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245085411Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245102064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245123931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245147924Z" level=info msg="NRI interface is disabled by configuration." Dec 13 08:47:36.247077 containerd[1596]: time="2024-12-13T08:47:36.245170343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 08:47:36.247681 containerd[1596]: time="2024-12-13T08:47:36.245693305Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:47:36.247681 containerd[1596]: time="2024-12-13T08:47:36.245812698Z" level=info msg="Connect containerd service" Dec 13 08:47:36.247681 containerd[1596]: time="2024-12-13T08:47:36.245885080Z" level=info msg="using legacy CRI server" Dec 13 08:47:36.247681 containerd[1596]: time="2024-12-13T08:47:36.245896721Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:47:36.247681 containerd[1596]: time="2024-12-13T08:47:36.246073282Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:47:36.257347 containerd[1596]: time="2024-12-13T08:47:36.255616086Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:47:36.257347 containerd[1596]: time="2024-12-13T08:47:36.255879167Z" level=info msg="Start subscribing containerd event" Dec 13 08:47:36.257347 containerd[1596]: time="2024-12-13T08:47:36.256004711Z" level=info msg="Start recovering state" Dec 13 08:47:36.256272 unknown[1653]: wrote ssh authorized keys file for user: core Dec 13 08:47:36.259537 containerd[1596]: time="2024-12-13T08:47:36.257865927Z" level=info msg="Start event monitor" Dec 13 08:47:36.259537 containerd[1596]: time="2024-12-13T08:47:36.257917861Z" level=info msg="Start snapshots syncer" Dec 13 08:47:36.259537 containerd[1596]: time="2024-12-13T08:47:36.257938696Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:47:36.259537 containerd[1596]: time="2024-12-13T08:47:36.257952867Z" level=info msg="Start streaming server" Dec 13 08:47:36.265274 containerd[1596]: time="2024-12-13T08:47:36.263704001Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:47:36.265274 containerd[1596]: time="2024-12-13T08:47:36.263846438Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:47:36.265729 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 08:47:36.273880 containerd[1596]: time="2024-12-13T08:47:36.266320926Z" level=info msg="containerd successfully booted in 0.177292s" Dec 13 08:47:36.272888 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:47:36.329317 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:47:36.324890 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 08:47:36.337175 systemd[1]: Finished sshkeys.service. Dec 13 08:47:36.635975 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:47:36.737498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 08:47:36.753601 systemd[1]: Starting issuegen.service - Generate /run/issue... 
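
Note the CRI warning during the containerd startup above: no CNI network config was found in /etc/cni/net.d, so pod networking stays unconfigured until something (typically a CNI plugin deployed later) drops a config there. A minimal hand-written example of the expected file format, with an illustrative name and subnet, using the standard bridge and host-local plugins from the NetworkPluginBinDir (/opt/cni/bin) shown in the config dump:

    # /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }
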
Dec 13 08:47:36.800286 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 08:47:36.800672 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 08:47:36.814405 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 08:47:36.872115 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 08:47:36.883899 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 08:47:36.896909 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 08:47:36.899570 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 08:47:37.017249 tar[1594]: linux-amd64/LICENSE Dec 13 08:47:37.022356 tar[1594]: linux-amd64/README.md Dec 13 08:47:37.043931 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 08:47:37.423584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:47:37.428387 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 08:47:37.431309 systemd[1]: Startup finished in 7.707s (kernel) + 8.363s (userspace) = 16.071s. Dec 13 08:47:37.437367 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:47:38.333324 kubelet[1705]: E1213 08:47:38.333205 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:47:38.337519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:47:38.337930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:47:43.357039 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 08:47:43.363762 systemd[1]: Started sshd@0-143.198.66.7:22-147.75.109.163:47946.service - OpenSSH per-connection server daemon (147.75.109.163:47946). Dec 13 08:47:43.463607 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 47946 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:43.466733 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:43.483866 systemd-logind[1573]: New session 1 of user core. Dec 13 08:47:43.485435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 08:47:43.502728 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 08:47:43.526708 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 08:47:43.537645 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 08:47:43.543168 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 08:47:43.684854 systemd[1723]: Queued start job for default target default.target. Dec 13 08:47:43.685319 systemd[1723]: Created slice app.slice - User Application Slice. Dec 13 08:47:43.685342 systemd[1723]: Reached target paths.target - Paths. Dec 13 08:47:43.685356 systemd[1723]: Reached target timers.target - Timers. Dec 13 08:47:43.697375 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 08:47:43.706382 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Dec 13 08:47:43.706636 systemd[1723]: Reached target sockets.target - Sockets. Dec 13 08:47:43.706721 systemd[1723]: Reached target basic.target - Basic System. Dec 13 08:47:43.706827 systemd[1723]: Reached target default.target - Main User Target. Dec 13 08:47:43.706864 systemd[1723]: Startup finished in 155ms. Dec 13 08:47:43.707235 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 08:47:43.711786 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 08:47:43.778927 systemd[1]: Started sshd@1-143.198.66.7:22-147.75.109.163:47956.service - OpenSSH per-connection server daemon (147.75.109.163:47956). Dec 13 08:47:43.836578 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 47956 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:43.838635 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:43.845184 systemd-logind[1573]: New session 2 of user core. Dec 13 08:47:43.851572 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 08:47:43.919939 sshd[1735]: pam_unix(sshd:session): session closed for user core Dec 13 08:47:43.929575 systemd[1]: Started sshd@2-143.198.66.7:22-147.75.109.163:47962.service - OpenSSH per-connection server daemon (147.75.109.163:47962). Dec 13 08:47:43.930338 systemd[1]: sshd@1-143.198.66.7:22-147.75.109.163:47956.service: Deactivated successfully. Dec 13 08:47:43.932712 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 08:47:43.934026 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Dec 13 08:47:43.936474 systemd-logind[1573]: Removed session 2. Dec 13 08:47:43.976416 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 47962 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:43.978330 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:43.984352 systemd-logind[1573]: New session 3 of user core. Dec 13 08:47:43.986486 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 08:47:44.043566 sshd[1740]: pam_unix(sshd:session): session closed for user core Dec 13 08:47:44.061722 systemd[1]: Started sshd@3-143.198.66.7:22-147.75.109.163:47976.service - OpenSSH per-connection server daemon (147.75.109.163:47976). Dec 13 08:47:44.062601 systemd[1]: sshd@2-143.198.66.7:22-147.75.109.163:47962.service: Deactivated successfully. Dec 13 08:47:44.065658 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 08:47:44.069487 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Dec 13 08:47:44.071484 systemd-logind[1573]: Removed session 3. Dec 13 08:47:44.112422 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 47976 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:44.114718 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:44.122518 systemd-logind[1573]: New session 4 of user core. Dec 13 08:47:44.129661 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 08:47:44.196584 sshd[1749]: pam_unix(sshd:session): session closed for user core Dec 13 08:47:44.206719 systemd[1]: Started sshd@4-143.198.66.7:22-147.75.109.163:47982.service - OpenSSH per-connection server daemon (147.75.109.163:47982). Dec 13 08:47:44.207577 systemd[1]: sshd@3-143.198.66.7:22-147.75.109.163:47976.service: Deactivated successfully. 
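
The sshd@0-143.198.66.7:22-147.75.109.163:47946.service style unit names above are systemd's per-connection pattern: sshd.socket accepts each TCP connection and spawns one templated sshd@.service instance for it, named after the local and remote endpoints. A sketch of the socket side (Accept=yes is what produces one unit per connection; the actual Flatcar unit may contain more settings):

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target
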
Dec 13 08:47:44.217118 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 08:47:44.218160 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Dec 13 08:47:44.221365 systemd-logind[1573]: Removed session 4. Dec 13 08:47:44.254231 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 47982 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:44.256510 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:44.263709 systemd-logind[1573]: New session 5 of user core. Dec 13 08:47:44.269678 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 08:47:44.342783 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 08:47:44.343132 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:47:44.357070 sudo[1763]: pam_unix(sudo:session): session closed for user root Dec 13 08:47:44.362136 sshd[1756]: pam_unix(sshd:session): session closed for user core Dec 13 08:47:44.377413 systemd[1]: Started sshd@5-143.198.66.7:22-147.75.109.163:47992.service - OpenSSH per-connection server daemon (147.75.109.163:47992). Dec 13 08:47:44.378388 systemd[1]: sshd@4-143.198.66.7:22-147.75.109.163:47982.service: Deactivated successfully. Dec 13 08:47:44.384529 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 08:47:44.388468 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Dec 13 08:47:44.390241 systemd-logind[1573]: Removed session 5. Dec 13 08:47:44.423520 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 47992 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:44.426145 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:44.436296 systemd-logind[1573]: New session 6 of user core. Dec 13 08:47:44.445692 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 08:47:44.510471 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 08:47:44.511670 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:47:44.518004 sudo[1773]: pam_unix(sudo:session): session closed for user root Dec 13 08:47:44.526835 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 08:47:44.527812 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:47:44.549560 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 08:47:44.551987 auditctl[1776]: No rules Dec 13 08:47:44.553135 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 08:47:44.553515 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 08:47:44.558521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:47:44.598779 augenrules[1795]: No rules Dec 13 08:47:44.600270 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:47:44.602294 sudo[1772]: pam_unix(sudo:session): session closed for user root Dec 13 08:47:44.608443 sshd[1765]: pam_unix(sshd:session): session closed for user core Dec 13 08:47:44.616818 systemd[1]: Started sshd@6-143.198.66.7:22-147.75.109.163:48008.service - OpenSSH per-connection server daemon (147.75.109.163:48008). 
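
In the audit-rules restart above, augenrules compiles everything under /etc/audit/rules.d/ into a single rule set, so "No rules" simply reflects that the preceding rm emptied the directory. A minimal hypothetical rule file and a manual check, for orientation:

    # /etc/audit/rules.d/10-example.rules -- hypothetical watch rule
    -w /etc/passwd -p wa -k passwd_changes

    augenrules --load    # compile rules.d and load into the kernel
    auditctl -l          # list the currently loaded rules
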
Dec 13 08:47:44.620916 systemd[1]: sshd@5-143.198.66.7:22-147.75.109.163:47992.service: Deactivated successfully. Dec 13 08:47:44.626930 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 08:47:44.628685 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Dec 13 08:47:44.632067 systemd-logind[1573]: Removed session 6. Dec 13 08:47:44.668391 sshd[1801]: Accepted publickey for core from 147.75.109.163 port 48008 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:47:44.671358 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:47:44.678791 systemd-logind[1573]: New session 7 of user core. Dec 13 08:47:44.689357 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 08:47:44.755590 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 08:47:44.756136 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:47:45.252555 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 08:47:45.253857 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 08:47:45.739946 dockerd[1825]: time="2024-12-13T08:47:45.739584575Z" level=info msg="Starting up" Dec 13 08:47:45.988605 systemd[1]: var-lib-docker-metacopy\x2dcheck2329770612-merged.mount: Deactivated successfully. Dec 13 08:47:46.010679 dockerd[1825]: time="2024-12-13T08:47:46.009952785Z" level=info msg="Loading containers: start." Dec 13 08:47:46.243237 kernel: Initializing XFRM netlink socket Dec 13 08:47:46.413159 systemd-networkd[1219]: docker0: Link UP Dec 13 08:47:46.500831 dockerd[1825]: time="2024-12-13T08:47:46.500768164Z" level=info msg="Loading containers: done." Dec 13 08:47:46.532543 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3506313321-merged.mount: Deactivated successfully. Dec 13 08:47:46.534036 dockerd[1825]: time="2024-12-13T08:47:46.533980219Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 08:47:46.534145 dockerd[1825]: time="2024-12-13T08:47:46.534108249Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 08:47:46.534369 dockerd[1825]: time="2024-12-13T08:47:46.534248178Z" level=info msg="Daemon has completed initialization" Dec 13 08:47:46.588080 dockerd[1825]: time="2024-12-13T08:47:46.587997032Z" level=info msg="API listen on /run/docker.sock" Dec 13 08:47:46.588506 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 08:47:47.629497 containerd[1596]: time="2024-12-13T08:47:47.629332580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 08:47:48.275154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156538944.mount: Deactivated successfully. Dec 13 08:47:48.588233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 08:47:48.596053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:47:48.849997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
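
Once dockerd reports "API listen on /run/docker.sock" above, the engine API answers directly on the Unix socket, which is a quick way to confirm the daemon is up without the docker CLI:

    curl -s --unix-socket /run/docker.sock http://localhost/version
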
Dec 13 08:47:48.865852 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:47:48.970891 kubelet[2035]: E1213 08:47:48.970718 2035 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:47:48.977923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:47:48.980295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:47:49.965245 containerd[1596]: time="2024-12-13T08:47:49.965152796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:49.967578 containerd[1596]: time="2024-12-13T08:47:49.967475665Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 08:47:49.969475 containerd[1596]: time="2024-12-13T08:47:49.969374124Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:49.975923 containerd[1596]: time="2024-12-13T08:47:49.975860231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:49.979184 containerd[1596]: time="2024-12-13T08:47:49.978899842Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.349472983s" Dec 13 08:47:49.979184 containerd[1596]: time="2024-12-13T08:47:49.978978290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 08:47:50.017074 containerd[1596]: time="2024-12-13T08:47:50.016734811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 08:47:51.889101 containerd[1596]: time="2024-12-13T08:47:51.888985730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:51.890562 containerd[1596]: time="2024-12-13T08:47:51.890420605Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 08:47:51.892758 containerd[1596]: time="2024-12-13T08:47:51.892667471Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:51.901717 containerd[1596]: time="2024-12-13T08:47:51.901604222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 
08:47:51.904074 containerd[1596]: time="2024-12-13T08:47:51.904017348Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.887230187s" Dec 13 08:47:51.904421 containerd[1596]: time="2024-12-13T08:47:51.904289399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 08:47:51.953364 containerd[1596]: time="2024-12-13T08:47:51.953252861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 08:47:53.130959 containerd[1596]: time="2024-12-13T08:47:53.130891386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:53.134155 containerd[1596]: time="2024-12-13T08:47:53.134079482Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 08:47:53.137212 containerd[1596]: time="2024-12-13T08:47:53.136518284Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:53.143496 containerd[1596]: time="2024-12-13T08:47:53.143437875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:53.146139 containerd[1596]: time="2024-12-13T08:47:53.146079201Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.192772929s" Dec 13 08:47:53.146379 containerd[1596]: time="2024-12-13T08:47:53.146347870Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 08:47:53.181447 containerd[1596]: time="2024-12-13T08:47:53.181391662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 08:47:54.397131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649060487.mount: Deactivated successfully. 
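
The PullImage lines here are CRI-level pulls served by containerd, whose images land in the k8s.io namespace. The same images can be pulled or listed by hand with containerd's ctr tool, e.g. (image reference taken from the log):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-proxy:v1.29.12
    ctr --namespace k8s.io images ls
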
Dec 13 08:47:54.975519 containerd[1596]: time="2024-12-13T08:47:54.975439036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:54.976982 containerd[1596]: time="2024-12-13T08:47:54.976916724Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 08:47:55.000778 containerd[1596]: time="2024-12-13T08:47:55.000665721Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:55.006337 containerd[1596]: time="2024-12-13T08:47:55.006238594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:55.007626 containerd[1596]: time="2024-12-13T08:47:55.006964792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.825510457s" Dec 13 08:47:55.007626 containerd[1596]: time="2024-12-13T08:47:55.007061983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 08:47:55.044714 containerd[1596]: time="2024-12-13T08:47:55.044458908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 08:47:55.047329 systemd-resolved[1476]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 13 08:47:55.521883 systemd[1]: Started sshd@7-143.198.66.7:22-207.154.236.92:49260.service - OpenSSH per-connection server daemon (207.154.236.92:49260). Dec 13 08:47:55.561511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957108422.mount: Deactivated successfully. Dec 13 08:47:56.458662 sshd[2083]: Invalid user doz from 207.154.236.92 port 49260 Dec 13 08:47:56.631780 sshd[2083]: Received disconnect from 207.154.236.92 port 49260:11: Bye Bye [preauth] Dec 13 08:47:56.631780 sshd[2083]: Disconnected from invalid user doz 207.154.236.92 port 49260 [preauth] Dec 13 08:47:56.633859 systemd[1]: sshd@7-143.198.66.7:22-207.154.236.92:49260.service: Deactivated successfully. 
Dec 13 08:47:56.703412 containerd[1596]: time="2024-12-13T08:47:56.703341346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:56.705475 containerd[1596]: time="2024-12-13T08:47:56.705392488Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 08:47:56.706826 containerd[1596]: time="2024-12-13T08:47:56.706756749Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:56.712351 containerd[1596]: time="2024-12-13T08:47:56.711670595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:56.714483 containerd[1596]: time="2024-12-13T08:47:56.714286618Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.669772641s" Dec 13 08:47:56.714483 containerd[1596]: time="2024-12-13T08:47:56.714342859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 08:47:56.748894 containerd[1596]: time="2024-12-13T08:47:56.748844308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 08:47:57.296639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125596259.mount: Deactivated successfully. 
Dec 13 08:47:57.306147 containerd[1596]: time="2024-12-13T08:47:57.304867937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:57.307772 containerd[1596]: time="2024-12-13T08:47:57.307715348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 08:47:57.309987 containerd[1596]: time="2024-12-13T08:47:57.309941483Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:57.315219 containerd[1596]: time="2024-12-13T08:47:57.315146396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:57.316991 containerd[1596]: time="2024-12-13T08:47:57.316366067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 567.259129ms" Dec 13 08:47:57.317254 containerd[1596]: time="2024-12-13T08:47:57.317208434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 08:47:57.348752 containerd[1596]: time="2024-12-13T08:47:57.348703821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 08:47:57.879358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288308067.mount: Deactivated successfully. Dec 13 08:47:58.101389 systemd-resolved[1476]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 08:47:59.228542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 08:47:59.235465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:47:59.453684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:47:59.468133 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:47:59.569096 kubelet[2200]: E1213 08:47:59.568282 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:47:59.573817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:47:59.574386 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
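
This is the third kubelet start attempt failing the same way: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, so these failures are expected until bootstrap runs. For reference, a minimal sketch of the file's shape (values illustrative, not this node's actual config; the log later shows it running with the cgroupfs driver):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false
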
Dec 13 08:47:59.724554 containerd[1596]: time="2024-12-13T08:47:59.724474857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:59.726361 containerd[1596]: time="2024-12-13T08:47:59.726284552Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 08:47:59.727592 containerd[1596]: time="2024-12-13T08:47:59.727499170Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:59.733206 containerd[1596]: time="2024-12-13T08:47:59.733101377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:47:59.735630 containerd[1596]: time="2024-12-13T08:47:59.735380651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.386628366s" Dec 13 08:47:59.735630 containerd[1596]: time="2024-12-13T08:47:59.735443806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 08:48:03.041459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:03.053717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:03.098170 systemd[1]: Reloading requested from client PID 2275 ('systemctl') (unit session-7.scope)... Dec 13 08:48:03.098209 systemd[1]: Reloading... Dec 13 08:48:03.242221 zram_generator::config[2311]: No configuration found. Dec 13 08:48:03.457406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:03.565076 systemd[1]: Reloading finished in 466 ms. Dec 13 08:48:03.624767 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 08:48:03.624895 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 08:48:03.625297 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:03.633643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:03.780537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:03.794934 (kubelet)[2380]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:48:03.872677 kubelet[2380]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:03.872677 kubelet[2380]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 08:48:03.872677 kubelet[2380]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:03.873346 kubelet[2380]: I1213 08:48:03.872727 2380 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:48:04.228850 kubelet[2380]: I1213 08:48:04.228250 2380 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:48:04.228850 kubelet[2380]: I1213 08:48:04.228315 2380 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:48:04.228850 kubelet[2380]: I1213 08:48:04.228661 2380 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:48:04.259667 kubelet[2380]: E1213 08:48:04.259527 2380 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.66.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.263505 kubelet[2380]: I1213 08:48:04.263283 2380 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:04.284039 kubelet[2380]: I1213 08:48:04.283996 2380 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:48:04.284554 kubelet[2380]: I1213 08:48:04.284521 2380 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:48:04.285787 kubelet[2380]: I1213 08:48:04.285716 2380 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:48:04.285787 kubelet[2380]: I1213 08:48:04.285782 2380 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:48:04.285787 kubelet[2380]: I1213 08:48:04.285798 2380 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 08:48:04.286127 kubelet[2380]: I1213 08:48:04.285969 2380 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:04.286127 kubelet[2380]: I1213 08:48:04.286107 2380 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:48:04.286127 kubelet[2380]: I1213 08:48:04.286124 2380 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:48:04.286969 kubelet[2380]: W1213 08:48:04.286873 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.66.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-2d211b5e28&limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.286969 kubelet[2380]: E1213 08:48:04.286944 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.66.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-2d211b5e28&limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.287142 kubelet[2380]: I1213 08:48:04.287111 2380 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:48:04.287207 kubelet[2380]: I1213 08:48:04.287154 2380 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:48:04.288816 kubelet[2380]: W1213 08:48:04.288768 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.66.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.288816 kubelet[2380]: E1213 08:48:04.288823 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.66.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.289452 kubelet[2380]: I1213 08:48:04.289376 2380 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:48:04.295470 kubelet[2380]: I1213 08:48:04.294974 2380 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:48:04.295470 kubelet[2380]: W1213 08:48:04.295159 2380 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
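Every client-go reflector failure above reduces to the same root cause: a TCP connect to the API server endpoint is refused because kube-apiserver is not serving yet. A minimal probe that reproduces exactly that check (standard library only; the endpoint is the one the log itself dials):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the reflectors above are failing against.
	conn, err := net.DialTimeout("tcp", "143.198.66.7:6443", 2*time.Second)
	if err != nil {
		// Prints e.g. "dial tcp 143.198.66.7:6443: connect: connection refused"
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver endpoint is reachable")
}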
Dec 13 08:48:04.296305 kubelet[2380]: I1213 08:48:04.295938 2380 server.go:1256] "Started kubelet" Dec 13 08:48:04.296499 kubelet[2380]: I1213 08:48:04.296350 2380 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:48:04.297433 kubelet[2380]: I1213 08:48:04.297364 2380 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:48:04.298012 kubelet[2380]: I1213 08:48:04.297862 2380 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:48:04.298643 kubelet[2380]: I1213 08:48:04.298604 2380 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:48:04.299926 kubelet[2380]: I1213 08:48:04.298783 2380 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:48:04.305867 kubelet[2380]: I1213 08:48:04.305827 2380 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:48:04.308611 kubelet[2380]: E1213 08:48:04.308269 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.66.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-2d211b5e28?timeout=10s\": dial tcp 143.198.66.7:6443: connect: connection refused" interval="200ms" Dec 13 08:48:04.312258 kubelet[2380]: E1213 08:48:04.311387 2380 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.66.7:6443/api/v1/namespaces/default/events\": dial tcp 143.198.66.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-b-2d211b5e28.1810b0490aadd174 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-b-2d211b5e28,UID:ci-4081.2.1-b-2d211b5e28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-b-2d211b5e28,},FirstTimestamp:2024-12-13 08:48:04.295905652 +0000 UTC m=+0.494469374,LastTimestamp:2024-12-13 08:48:04.295905652 +0000 UTC m=+0.494469374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-b-2d211b5e28,}" Dec 13 08:48:04.312258 kubelet[2380]: I1213 08:48:04.311832 2380 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:48:04.312258 kubelet[2380]: I1213 08:48:04.311914 2380 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:48:04.313925 kubelet[2380]: I1213 08:48:04.313540 2380 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:48:04.317477 kubelet[2380]: I1213 08:48:04.317445 2380 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:48:04.318137 kubelet[2380]: W1213 08:48:04.318088 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.66.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.318332 kubelet[2380]: E1213 08:48:04.318318 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.66.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection 
refused Dec 13 08:48:04.318943 kubelet[2380]: I1213 08:48:04.318925 2380 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:48:04.325647 kubelet[2380]: I1213 08:48:04.325417 2380 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:48:04.328250 kubelet[2380]: I1213 08:48:04.326660 2380 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:48:04.328250 kubelet[2380]: I1213 08:48:04.326698 2380 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:48:04.328250 kubelet[2380]: I1213 08:48:04.326725 2380 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:48:04.328250 kubelet[2380]: E1213 08:48:04.326786 2380 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:48:04.336528 kubelet[2380]: W1213 08:48:04.336469 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.66.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.336642 kubelet[2380]: E1213 08:48:04.336539 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.66.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:04.358163 kubelet[2380]: I1213 08:48:04.358134 2380 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:48:04.358709 kubelet[2380]: I1213 08:48:04.358418 2380 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:48:04.358709 kubelet[2380]: I1213 08:48:04.358445 2380 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:04.360601 kubelet[2380]: I1213 08:48:04.360522 2380 policy_none.go:49] "None policy: Start" Dec 13 08:48:04.361371 kubelet[2380]: I1213 08:48:04.361332 2380 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:48:04.361874 kubelet[2380]: I1213 08:48:04.361518 2380 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:48:04.367635 kubelet[2380]: I1213 08:48:04.367604 2380 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:48:04.369209 kubelet[2380]: I1213 08:48:04.368126 2380 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:48:04.377595 kubelet[2380]: E1213 08:48:04.377568 2380 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-b-2d211b5e28\" not found" Dec 13 08:48:04.409598 kubelet[2380]: I1213 08:48:04.409552 2380 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.409980 kubelet[2380]: E1213 08:48:04.409961 2380 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.66.7:6443/api/v1/nodes\": dial tcp 143.198.66.7:6443: connect: connection refused" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.428234 kubelet[2380]: I1213 08:48:04.427977 2380 topology_manager.go:215] "Topology Admit Handler" podUID="2547c78291367166bfcbe6e6cd5562e7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.430329 kubelet[2380]: I1213 08:48:04.429977 
2380 topology_manager.go:215] "Topology Admit Handler" podUID="6b5c267e6e630840d00560945d7a4eb1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.431506 kubelet[2380]: I1213 08:48:04.431482 2380 topology_manager.go:215] "Topology Admit Handler" podUID="8a649e2851793cd72cba3aada4b02472" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.510069 kubelet[2380]: E1213 08:48:04.509923 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.66.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-2d211b5e28?timeout=10s\": dial tcp 143.198.66.7:6443: connect: connection refused" interval="400ms" Dec 13 08:48:04.612277 kubelet[2380]: I1213 08:48:04.612231 2380 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.612826 kubelet[2380]: E1213 08:48:04.612749 2380 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.66.7:6443/api/v1/nodes\": dial tcp 143.198.66.7:6443: connect: connection refused" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.616301 kubelet[2380]: I1213 08:48:04.616193 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.618668 kubelet[2380]: I1213 08:48:04.618569 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.618668 kubelet[2380]: I1213 08:48:04.618633 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.618668 kubelet[2380]: I1213 08:48:04.618708 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.618668 kubelet[2380]: I1213 08:48:04.618755 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.618668 kubelet[2380]: I1213 08:48:04.618790 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.619292 kubelet[2380]: I1213 08:48:04.618822 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a649e2851793cd72cba3aada4b02472-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-b-2d211b5e28\" (UID: \"8a649e2851793cd72cba3aada4b02472\") " pod="kube-system/kube-scheduler-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.619292 kubelet[2380]: I1213 08:48:04.618857 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.619292 kubelet[2380]: I1213 08:48:04.618916 2380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:04.736015 kubelet[2380]: E1213 08:48:04.735550 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:04.736760 containerd[1596]: time="2024-12-13T08:48:04.736711718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-b-2d211b5e28,Uid:2547c78291367166bfcbe6e6cd5562e7,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:04.739124 kubelet[2380]: E1213 08:48:04.738792 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:04.739124 kubelet[2380]: E1213 08:48:04.738885 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:04.744604 containerd[1596]: time="2024-12-13T08:48:04.744182248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-b-2d211b5e28,Uid:6b5c267e6e630840d00560945d7a4eb1,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:04.744833 containerd[1596]: time="2024-12-13T08:48:04.744219301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-b-2d211b5e28,Uid:8a649e2851793cd72cba3aada4b02472,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:04.745958 systemd-resolved[1476]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Dec 13 08:48:04.910771 kubelet[2380]: E1213 08:48:04.910601 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.66.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-2d211b5e28?timeout=10s\": dial tcp 143.198.66.7:6443: connect: connection refused" interval="800ms" Dec 13 08:48:05.015384 kubelet[2380]: I1213 08:48:05.014895 2380 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:05.015584 kubelet[2380]: E1213 08:48:05.015562 2380 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.66.7:6443/api/v1/nodes\": dial tcp 143.198.66.7:6443: connect: connection refused" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:05.248083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122780084.mount: Deactivated successfully. Dec 13 08:48:05.260396 containerd[1596]: time="2024-12-13T08:48:05.260322950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:05.262066 containerd[1596]: time="2024-12-13T08:48:05.261984371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 08:48:05.263930 containerd[1596]: time="2024-12-13T08:48:05.263874519Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:05.267864 containerd[1596]: time="2024-12-13T08:48:05.267586142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:48:05.267864 containerd[1596]: time="2024-12-13T08:48:05.267745460Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:05.270941 containerd[1596]: time="2024-12-13T08:48:05.270226427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:05.271851 containerd[1596]: time="2024-12-13T08:48:05.271797619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.463356ms" Dec 13 08:48:05.274214 containerd[1596]: time="2024-12-13T08:48:05.272607393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:48:05.275394 containerd[1596]: time="2024-12-13T08:48:05.275351326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:48:05.280177 containerd[1596]: time="2024-12-13T08:48:05.280127012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.283788ms" Dec 13 08:48:05.294262 containerd[1596]: time="2024-12-13T08:48:05.294181656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.26114ms" Dec 13 08:48:05.413572 kubelet[2380]: W1213 08:48:05.412369 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://143.198.66.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.413572 kubelet[2380]: E1213 08:48:05.412444 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.66.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.465959 kubelet[2380]: W1213 08:48:05.464603 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://143.198.66.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.465959 kubelet[2380]: E1213 08:48:05.464664 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.66.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.495580 containerd[1596]: time="2024-12-13T08:48:05.495438220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:05.496123 containerd[1596]: time="2024-12-13T08:48:05.495536334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:05.496123 containerd[1596]: time="2024-12-13T08:48:05.496094193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.496396 containerd[1596]: time="2024-12-13T08:48:05.496356034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.500209 containerd[1596]: time="2024-12-13T08:48:05.499865593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:05.500209 containerd[1596]: time="2024-12-13T08:48:05.500014921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:05.502924 containerd[1596]: time="2024-12-13T08:48:05.501394608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:05.502924 containerd[1596]: time="2024-12-13T08:48:05.502312630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:05.502924 containerd[1596]: time="2024-12-13T08:48:05.502327419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.502924 containerd[1596]: time="2024-12-13T08:48:05.502433354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.502924 containerd[1596]: time="2024-12-13T08:48:05.501466273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.503174 containerd[1596]: time="2024-12-13T08:48:05.503144996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:05.642616 containerd[1596]: time="2024-12-13T08:48:05.642477369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-b-2d211b5e28,Uid:2547c78291367166bfcbe6e6cd5562e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bae3ed764f28436b3a92e965a500680a7d8b615574e96e6685621be8f6f131\"" Dec 13 08:48:05.646671 containerd[1596]: time="2024-12-13T08:48:05.646630210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-b-2d211b5e28,Uid:8a649e2851793cd72cba3aada4b02472,Namespace:kube-system,Attempt:0,} returns sandbox id \"4887064e66608f2b2f5882a526fa2143245108eaabeda9f145c677d4d9ed8fe2\"" Dec 13 08:48:05.647890 kubelet[2380]: E1213 08:48:05.647852 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:05.648245 kubelet[2380]: E1213 08:48:05.648207 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:05.653958 containerd[1596]: time="2024-12-13T08:48:05.653912971Z" level=info msg="CreateContainer within sandbox \"a1bae3ed764f28436b3a92e965a500680a7d8b615574e96e6685621be8f6f131\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 08:48:05.654446 containerd[1596]: time="2024-12-13T08:48:05.654419683Z" level=info msg="CreateContainer within sandbox \"4887064e66608f2b2f5882a526fa2143245108eaabeda9f145c677d4d9ed8fe2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 08:48:05.658732 containerd[1596]: time="2024-12-13T08:48:05.658679670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-b-2d211b5e28,Uid:6b5c267e6e630840d00560945d7a4eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9f21f9af8463823e7ce52a4d1afd4f9e1c6ddec6305ba4e298ca2eb7f506e8\"" Dec 13 08:48:05.659992 kubelet[2380]: E1213 08:48:05.659971 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:05.662901 containerd[1596]: time="2024-12-13T08:48:05.662756132Z" level=info msg="CreateContainer within sandbox \"8c9f21f9af8463823e7ce52a4d1afd4f9e1c6ddec6305ba4e298ca2eb7f506e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 08:48:05.690048 containerd[1596]: time="2024-12-13T08:48:05.689851742Z" level=info msg="CreateContainer 
within sandbox \"4887064e66608f2b2f5882a526fa2143245108eaabeda9f145c677d4d9ed8fe2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"978eb8b3f125d6a5c63b0bb5ac164e93abb8da7c8ff8111d2e062df16846db21\"" Dec 13 08:48:05.690830 containerd[1596]: time="2024-12-13T08:48:05.690787400Z" level=info msg="StartContainer for \"978eb8b3f125d6a5c63b0bb5ac164e93abb8da7c8ff8111d2e062df16846db21\"" Dec 13 08:48:05.697628 containerd[1596]: time="2024-12-13T08:48:05.697487210Z" level=info msg="CreateContainer within sandbox \"8c9f21f9af8463823e7ce52a4d1afd4f9e1c6ddec6305ba4e298ca2eb7f506e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"50a8b24decf149e746b8741d6b7ea3fa7a2f77093fb9ba61ffd901fcc14bb3cb\"" Dec 13 08:48:05.698740 containerd[1596]: time="2024-12-13T08:48:05.698345499Z" level=info msg="StartContainer for \"50a8b24decf149e746b8741d6b7ea3fa7a2f77093fb9ba61ffd901fcc14bb3cb\"" Dec 13 08:48:05.699327 containerd[1596]: time="2024-12-13T08:48:05.699143117Z" level=info msg="CreateContainer within sandbox \"a1bae3ed764f28436b3a92e965a500680a7d8b615574e96e6685621be8f6f131\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4195d84cf587424ed0fc9af2c04272db6b899485570452992a69338caa3348fa\"" Dec 13 08:48:05.700234 containerd[1596]: time="2024-12-13T08:48:05.699857362Z" level=info msg="StartContainer for \"4195d84cf587424ed0fc9af2c04272db6b899485570452992a69338caa3348fa\"" Dec 13 08:48:05.711739 kubelet[2380]: E1213 08:48:05.711269 2380 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.66.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-2d211b5e28?timeout=10s\": dial tcp 143.198.66.7:6443: connect: connection refused" interval="1.6s" Dec 13 08:48:05.819316 kubelet[2380]: I1213 08:48:05.817921 2380 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:05.819316 kubelet[2380]: E1213 08:48:05.818280 2380 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.66.7:6443/api/v1/nodes\": dial tcp 143.198.66.7:6443: connect: connection refused" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:05.835356 containerd[1596]: time="2024-12-13T08:48:05.835304696Z" level=info msg="StartContainer for \"50a8b24decf149e746b8741d6b7ea3fa7a2f77093fb9ba61ffd901fcc14bb3cb\" returns successfully" Dec 13 08:48:05.853464 kubelet[2380]: W1213 08:48:05.853393 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://143.198.66.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-2d211b5e28&limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.853464 kubelet[2380]: E1213 08:48:05.853457 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.66.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-2d211b5e28&limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.856272 containerd[1596]: time="2024-12-13T08:48:05.856223274Z" level=info msg="StartContainer for \"978eb8b3f125d6a5c63b0bb5ac164e93abb8da7c8ff8111d2e062df16846db21\" returns successfully" Dec 13 08:48:05.856888 containerd[1596]: time="2024-12-13T08:48:05.856794778Z" level=info msg="StartContainer for \"4195d84cf587424ed0fc9af2c04272db6b899485570452992a69338caa3348fa\" 
returns successfully" Dec 13 08:48:05.910009 kubelet[2380]: W1213 08:48:05.909934 2380 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://143.198.66.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:05.910009 kubelet[2380]: E1213 08:48:05.910017 2380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.66.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.66.7:6443: connect: connection refused Dec 13 08:48:06.373056 kubelet[2380]: E1213 08:48:06.373015 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:06.391451 kubelet[2380]: E1213 08:48:06.391406 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:06.409224 kubelet[2380]: E1213 08:48:06.406353 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:07.396771 kubelet[2380]: E1213 08:48:07.396731 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:07.399771 kubelet[2380]: E1213 08:48:07.399409 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:07.421497 kubelet[2380]: I1213 08:48:07.421466 2380 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:08.117840 kubelet[2380]: E1213 08:48:08.117783 2380 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-b-2d211b5e28\" not found" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:08.180805 kubelet[2380]: I1213 08:48:08.180254 2380 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:08.289370 kubelet[2380]: I1213 08:48:08.288374 2380 apiserver.go:52] "Watching apiserver" Dec 13 08:48:08.318197 kubelet[2380]: I1213 08:48:08.318120 2380 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:48:10.638142 kubelet[2380]: W1213 08:48:10.638103 2380 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:10.640240 kubelet[2380]: E1213 08:48:10.638883 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:11.406130 kubelet[2380]: E1213 08:48:11.406051 2380 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:11.457975 systemd[1]: Reloading requested from client PID 2653 ('systemctl') 
(unit session-7.scope)... Dec 13 08:48:11.458627 systemd[1]: Reloading... Dec 13 08:48:11.577289 zram_generator::config[2695]: No configuration found. Dec 13 08:48:11.825208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:48:11.949855 systemd[1]: Reloading finished in 490 ms. Dec 13 08:48:12.004047 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:12.004356 kubelet[2380]: I1213 08:48:12.004159 2380 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:12.016701 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 08:48:12.017207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:12.028563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:48:12.199624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:48:12.217963 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:48:12.320304 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:12.320304 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:48:12.320304 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:48:12.320963 kubelet[2753]: I1213 08:48:12.320309 2753 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:48:12.329517 kubelet[2753]: I1213 08:48:12.329480 2753 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:48:12.329517 kubelet[2753]: I1213 08:48:12.329513 2753 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:48:12.329791 kubelet[2753]: I1213 08:48:12.329771 2753 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:48:12.332751 kubelet[2753]: I1213 08:48:12.332443 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 08:48:12.335558 kubelet[2753]: I1213 08:48:12.335094 2753 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.349746 2753 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.350496 2753 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.350854 2753 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.350886 2753 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.350902 2753 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:48:12.352048 kubelet[2753]: I1213 08:48:12.350956 2753 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:12.352639 kubelet[2753]: I1213 08:48:12.351077 2753 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:48:12.352639 kubelet[2753]: I1213 08:48:12.351094 2753 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:48:12.352639 kubelet[2753]: I1213 08:48:12.351125 2753 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:48:12.352639 kubelet[2753]: I1213 08:48:12.351145 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:48:12.358691 sudo[2767]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 08:48:12.359930 sudo[2767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 08:48:12.364471 kubelet[2753]: I1213 08:48:12.363693 2753 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:48:12.364471 kubelet[2753]: I1213 08:48:12.363955 2753 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:48:12.364577 kubelet[2753]: I1213 08:48:12.364483 2753 server.go:1256] "Started kubelet" Dec 13 08:48:12.371404 kubelet[2753]: I1213 08:48:12.367582 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:48:12.380235 kubelet[2753]: I1213 08:48:12.380123 2753 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 
08:48:12.381367 kubelet[2753]: I1213 08:48:12.381336 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:48:12.393162 kubelet[2753]: I1213 08:48:12.389876 2753 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:48:12.393162 kubelet[2753]: I1213 08:48:12.386991 2753 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:48:12.393162 kubelet[2753]: I1213 08:48:12.387018 2753 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:48:12.393162 kubelet[2753]: I1213 08:48:12.392459 2753 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:48:12.402579 kubelet[2753]: I1213 08:48:12.402040 2753 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:48:12.413237 kubelet[2753]: I1213 08:48:12.412703 2753 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:48:12.413237 kubelet[2753]: I1213 08:48:12.412811 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:48:12.420933 kubelet[2753]: I1213 08:48:12.419833 2753 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:48:12.424128 kubelet[2753]: E1213 08:48:12.424099 2753 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:48:12.446029 kubelet[2753]: I1213 08:48:12.445994 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:48:12.449356 kubelet[2753]: I1213 08:48:12.449308 2753 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 08:48:12.449356 kubelet[2753]: I1213 08:48:12.449347 2753 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:48:12.449356 kubelet[2753]: I1213 08:48:12.449369 2753 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:48:12.449912 kubelet[2753]: E1213 08:48:12.449429 2753 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:48:12.496545 kubelet[2753]: I1213 08:48:12.495624 2753 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.517098 kubelet[2753]: I1213 08:48:12.517052 2753 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.517312 kubelet[2753]: I1213 08:48:12.517255 2753 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.550222 kubelet[2753]: E1213 08:48:12.549592 2753 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 08:48:12.580645 kubelet[2753]: I1213 08:48:12.580611 2753 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:48:12.580645 kubelet[2753]: I1213 08:48:12.580642 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:48:12.580645 kubelet[2753]: I1213 08:48:12.580665 2753 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:48:12.580931 kubelet[2753]: I1213 08:48:12.580832 2753 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:48:12.580931 kubelet[2753]: I1213 08:48:12.580857 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:48:12.580931 kubelet[2753]: I1213 08:48:12.580866 2753 policy_none.go:49] "None policy: Start" Dec 13 08:48:12.583819 kubelet[2753]: I1213 08:48:12.583079 2753 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:48:12.583819 kubelet[2753]: I1213 08:48:12.583115 2753 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:48:12.583819 kubelet[2753]: I1213 08:48:12.583391 2753 state_mem.go:75] "Updated machine memory state" Dec 13 08:48:12.585525 kubelet[2753]: I1213 08:48:12.584950 2753 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:48:12.590439 kubelet[2753]: I1213 08:48:12.589749 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:48:12.750964 kubelet[2753]: I1213 08:48:12.750829 2753 topology_manager.go:215] "Topology Admit Handler" podUID="8a649e2851793cd72cba3aada4b02472" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.751147 kubelet[2753]: I1213 08:48:12.751016 2753 topology_manager.go:215] "Topology Admit Handler" podUID="2547c78291367166bfcbe6e6cd5562e7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.751147 kubelet[2753]: I1213 08:48:12.751090 2753 topology_manager.go:215] "Topology Admit Handler" podUID="6b5c267e6e630840d00560945d7a4eb1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.769504 kubelet[2753]: W1213 08:48:12.768487 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:12.769504 kubelet[2753]: W1213 08:48:12.768553 2753 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:12.769798 kubelet[2753]: W1213 08:48:12.769761 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:12.769902 kubelet[2753]: E1213 08:48:12.769881 2753 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.796898 kubelet[2753]: I1213 08:48:12.796848 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.796898 kubelet[2753]: I1213 08:48:12.796904 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797123 kubelet[2753]: I1213 08:48:12.796927 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797123 kubelet[2753]: I1213 08:48:12.796953 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2547c78291367166bfcbe6e6cd5562e7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-b-2d211b5e28\" (UID: \"2547c78291367166bfcbe6e6cd5562e7\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797123 kubelet[2753]: I1213 08:48:12.796976 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797123 kubelet[2753]: I1213 08:48:12.797000 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a649e2851793cd72cba3aada4b02472-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-b-2d211b5e28\" (UID: \"8a649e2851793cd72cba3aada4b02472\") " pod="kube-system/kube-scheduler-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797123 kubelet[2753]: I1213 08:48:12.797019 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " 
pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797400 kubelet[2753]: I1213 08:48:12.797051 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:12.797400 kubelet[2753]: I1213 08:48:12.797073 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b5c267e6e630840d00560945d7a4eb1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-b-2d211b5e28\" (UID: \"6b5c267e6e630840d00560945d7a4eb1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:13.075691 kubelet[2753]: E1213 08:48:13.075577 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.076032 kubelet[2753]: E1213 08:48:13.076006 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.076411 kubelet[2753]: E1213 08:48:13.076330 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.221350 sudo[2767]: pam_unix(sudo:session): session closed for user root Dec 13 08:48:13.361419 kubelet[2753]: I1213 08:48:13.361054 2753 apiserver.go:52] "Watching apiserver" Dec 13 08:48:13.392563 kubelet[2753]: I1213 08:48:13.392473 2753 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:48:13.517273 kubelet[2753]: E1213 08:48:13.516978 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.518370 kubelet[2753]: E1213 08:48:13.518347 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.529636 kubelet[2753]: W1213 08:48:13.529599 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 08:48:13.529960 kubelet[2753]: E1213 08:48:13.529941 2753 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-b-2d211b5e28\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.1-b-2d211b5e28" Dec 13 08:48:13.532835 kubelet[2753]: E1213 08:48:13.532749 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:13.591653 kubelet[2753]: I1213 08:48:13.591593 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-b-2d211b5e28" podStartSLOduration=3.59153332 podStartE2EDuration="3.59153332s" 
podCreationTimestamp="2024-12-13 08:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:13.574964257 +0000 UTC m=+1.337643268" watchObservedRunningTime="2024-12-13 08:48:13.59153332 +0000 UTC m=+1.354212304" Dec 13 08:48:13.611776 kubelet[2753]: I1213 08:48:13.611578 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-b-2d211b5e28" podStartSLOduration=1.611511008 podStartE2EDuration="1.611511008s" podCreationTimestamp="2024-12-13 08:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:13.593056492 +0000 UTC m=+1.355735477" watchObservedRunningTime="2024-12-13 08:48:13.611511008 +0000 UTC m=+1.374190187" Dec 13 08:48:14.521286 kubelet[2753]: E1213 08:48:14.519653 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:14.521286 kubelet[2753]: E1213 08:48:14.520299 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:14.997802 sudo[1808]: pam_unix(sudo:session): session closed for user root Dec 13 08:48:15.003170 sshd[1801]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:15.008522 systemd[1]: sshd@6-143.198.66.7:22-147.75.109.163:48008.service: Deactivated successfully. Dec 13 08:48:15.009279 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:48:15.016231 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:48:15.017713 systemd-logind[1573]: Removed session 7. Dec 13 08:48:17.593936 kubelet[2753]: E1213 08:48:17.593834 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:17.615559 kubelet[2753]: I1213 08:48:17.614873 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-b-2d211b5e28" podStartSLOduration=5.614821889 podStartE2EDuration="5.614821889s" podCreationTimestamp="2024-12-13 08:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:13.613895826 +0000 UTC m=+1.376574810" watchObservedRunningTime="2024-12-13 08:48:17.614821889 +0000 UTC m=+5.377500863" Dec 13 08:48:18.527212 kubelet[2753]: E1213 08:48:18.527113 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:18.713665 systemd[1]: Started sshd@8-143.198.66.7:22-161.132.52.23:8188.service - OpenSSH per-connection server daemon (161.132.52.23:8188). Dec 13 08:48:19.513304 sshd[2821]: Invalid user hks from 161.132.52.23 port 8188 Dec 13 08:48:19.648235 sshd[2821]: Received disconnect from 161.132.52.23 port 8188:11: Bye Bye [preauth] Dec 13 08:48:19.648235 sshd[2821]: Disconnected from invalid user hks 161.132.52.23 port 8188 [preauth] Dec 13 08:48:19.651107 systemd[1]: sshd@8-143.198.66.7:22-161.132.52.23:8188.service: Deactivated successfully. 
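The podStartSLOduration figures reported by pod_startup_latency_tracker above appear to be plain timestamp arithmetic: watchObservedRunningTime minus podCreationTimestamp. A sketch with both values quoted from the controller-manager entry (the interpretation is inferred from the numbers, not from the tracker's source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time string form, as printed in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-12-13 08:48:10 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-12-13 08:48:13.59153332 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.59153332s, the podStartSLOduration the tracker reports.
	fmt.Println("podStartSLOduration =", running.Sub(created))
}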
Dec 13 08:48:20.360895 update_engine[1580]: I20241213 08:48:20.360670 1580 update_attempter.cc:509] Updating boot flags... Dec 13 08:48:20.405286 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2833) Dec 13 08:48:20.445069 kubelet[2753]: E1213 08:48:20.443815 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:20.500255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2837) Dec 13 08:48:20.538752 kubelet[2753]: E1213 08:48:20.537380 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:20.643383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2837) Dec 13 08:48:21.538237 kubelet[2753]: E1213 08:48:21.537413 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:23.160582 kubelet[2753]: E1213 08:48:23.159575 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:25.257044 kubelet[2753]: I1213 08:48:25.256134 2753 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 08:48:25.260784 containerd[1596]: time="2024-12-13T08:48:25.260608066Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 08:48:25.262039 kubelet[2753]: I1213 08:48:25.260942 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 08:48:25.867792 kubelet[2753]: I1213 08:48:25.867598 2753 topology_manager.go:215] "Topology Admit Handler" podUID="3cb2adf9-52d0-433c-afcb-909318767c7f" podNamespace="kube-system" podName="kube-proxy-gjlql" Dec 13 08:48:25.886547 kubelet[2753]: I1213 08:48:25.886504 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cb2adf9-52d0-433c-afcb-909318767c7f-xtables-lock\") pod \"kube-proxy-gjlql\" (UID: \"3cb2adf9-52d0-433c-afcb-909318767c7f\") " pod="kube-system/kube-proxy-gjlql" Dec 13 08:48:25.889570 kubelet[2753]: I1213 08:48:25.889396 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cb2adf9-52d0-433c-afcb-909318767c7f-kube-proxy\") pod \"kube-proxy-gjlql\" (UID: \"3cb2adf9-52d0-433c-afcb-909318767c7f\") " pod="kube-system/kube-proxy-gjlql" Dec 13 08:48:25.889570 kubelet[2753]: I1213 08:48:25.889453 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cb2adf9-52d0-433c-afcb-909318767c7f-lib-modules\") pod \"kube-proxy-gjlql\" (UID: \"3cb2adf9-52d0-433c-afcb-909318767c7f\") " pod="kube-system/kube-proxy-gjlql" Dec 13 08:48:25.889570 kubelet[2753]: I1213 08:48:25.889492 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wrb\" (UniqueName: \"kubernetes.io/projected/3cb2adf9-52d0-433c-afcb-909318767c7f-kube-api-access-d6wrb\") pod \"kube-proxy-gjlql\" (UID: \"3cb2adf9-52d0-433c-afcb-909318767c7f\") " pod="kube-system/kube-proxy-gjlql" Dec 13 08:48:25.903320 kubelet[2753]: I1213 08:48:25.902411 2753 topology_manager.go:215] "Topology Admit Handler" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" podNamespace="kube-system" podName="cilium-5grs2" Dec 13 08:48:25.990470 kubelet[2753]: I1213 08:48:25.990420 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hostproc\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990470 kubelet[2753]: I1213 08:48:25.990480 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-etc-cni-netd\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990513 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-xtables-lock\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990548 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-kernel\") pod \"cilium-5grs2\" (UID: 
\"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990579 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt8zq\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-kube-api-access-jt8zq\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990611 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cni-path\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990640 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-lib-modules\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.990727 kubelet[2753]: I1213 08:48:25.990671 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-config-path\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990700 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hubble-tls\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990744 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-bpf-maps\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990777 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-cgroup\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990809 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-net\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990853 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-run\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:25.991058 kubelet[2753]: I1213 08:48:25.990885 2753 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c31d8e36-82d0-42c3-9aa1-11a73a25155c-clustermesh-secrets\") pod \"cilium-5grs2\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " pod="kube-system/cilium-5grs2" Dec 13 08:48:26.173407 kubelet[2753]: E1213 08:48:26.173174 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.174250 containerd[1596]: time="2024-12-13T08:48:26.174133556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjlql,Uid:3cb2adf9-52d0-433c-afcb-909318767c7f,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:26.211231 kubelet[2753]: E1213 08:48:26.211094 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.213213 containerd[1596]: time="2024-12-13T08:48:26.212784192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5grs2,Uid:c31d8e36-82d0-42c3-9aa1-11a73a25155c,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:26.228240 containerd[1596]: time="2024-12-13T08:48:26.227272790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:26.228240 containerd[1596]: time="2024-12-13T08:48:26.228086308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:26.228240 containerd[1596]: time="2024-12-13T08:48:26.228101482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.229306 containerd[1596]: time="2024-12-13T08:48:26.228248786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.267256 containerd[1596]: time="2024-12-13T08:48:26.267074033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:26.268651 containerd[1596]: time="2024-12-13T08:48:26.267325779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:26.268651 containerd[1596]: time="2024-12-13T08:48:26.267362105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.268651 containerd[1596]: time="2024-12-13T08:48:26.267520122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.320631 containerd[1596]: time="2024-12-13T08:48:26.320556952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjlql,Uid:3cb2adf9-52d0-433c-afcb-909318767c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"704ccac99e8892ff4dd320fd0fc3f2ab09316b2b20e4637b14ed15f13ba9fcc4\"" Dec 13 08:48:26.322879 kubelet[2753]: E1213 08:48:26.322810 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.340216 containerd[1596]: time="2024-12-13T08:48:26.338433889Z" level=info msg="CreateContainer within sandbox \"704ccac99e8892ff4dd320fd0fc3f2ab09316b2b20e4637b14ed15f13ba9fcc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:48:26.380309 kubelet[2753]: I1213 08:48:26.375915 2753 topology_manager.go:215] "Topology Admit Handler" podUID="d8d1f803-cd52-4fcc-ae0a-660940990088" podNamespace="kube-system" podName="cilium-operator-5cc964979-nvs7v" Dec 13 08:48:26.380518 containerd[1596]: time="2024-12-13T08:48:26.378482626Z" level=info msg="CreateContainer within sandbox \"704ccac99e8892ff4dd320fd0fc3f2ab09316b2b20e4637b14ed15f13ba9fcc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2173f8062ac89e47540426386257c16cea08a5046c2c05b057186a2e8232bf12\"" Dec 13 08:48:26.384138 containerd[1596]: time="2024-12-13T08:48:26.384003097Z" level=info msg="StartContainer for \"2173f8062ac89e47540426386257c16cea08a5046c2c05b057186a2e8232bf12\"" Dec 13 08:48:26.394287 kubelet[2753]: I1213 08:48:26.394237 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qrt\" (UniqueName: \"kubernetes.io/projected/d8d1f803-cd52-4fcc-ae0a-660940990088-kube-api-access-r4qrt\") pod \"cilium-operator-5cc964979-nvs7v\" (UID: \"d8d1f803-cd52-4fcc-ae0a-660940990088\") " pod="kube-system/cilium-operator-5cc964979-nvs7v" Dec 13 08:48:26.394945 kubelet[2753]: I1213 08:48:26.394567 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8d1f803-cd52-4fcc-ae0a-660940990088-cilium-config-path\") pod \"cilium-operator-5cc964979-nvs7v\" (UID: \"d8d1f803-cd52-4fcc-ae0a-660940990088\") " pod="kube-system/cilium-operator-5cc964979-nvs7v" Dec 13 08:48:26.396052 containerd[1596]: time="2024-12-13T08:48:26.395772945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5grs2,Uid:c31d8e36-82d0-42c3-9aa1-11a73a25155c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\"" Dec 13 08:48:26.399475 kubelet[2753]: E1213 08:48:26.398132 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.403625 containerd[1596]: time="2024-12-13T08:48:26.403568813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 08:48:26.547219 containerd[1596]: time="2024-12-13T08:48:26.545660813Z" level=info msg="StartContainer for \"2173f8062ac89e47540426386257c16cea08a5046c2c05b057186a2e8232bf12\" returns successfully" Dec 13 08:48:26.562543 kubelet[2753]: E1213 08:48:26.562437 2753 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.691613 kubelet[2753]: E1213 08:48:26.690699 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:26.693173 containerd[1596]: time="2024-12-13T08:48:26.692576080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nvs7v,Uid:d8d1f803-cd52-4fcc-ae0a-660940990088,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:26.741423 containerd[1596]: time="2024-12-13T08:48:26.740953570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:26.741423 containerd[1596]: time="2024-12-13T08:48:26.741033780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:26.741423 containerd[1596]: time="2024-12-13T08:48:26.741059145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.743292 containerd[1596]: time="2024-12-13T08:48:26.742469429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:26.853169 containerd[1596]: time="2024-12-13T08:48:26.852759928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nvs7v,Uid:d8d1f803-cd52-4fcc-ae0a-660940990088,Namespace:kube-system,Attempt:0,} returns sandbox id \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\"" Dec 13 08:48:26.854593 kubelet[2753]: E1213 08:48:26.854529 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:32.476623 kubelet[2753]: I1213 08:48:32.476233 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gjlql" podStartSLOduration=7.476156847 podStartE2EDuration="7.476156847s" podCreationTimestamp="2024-12-13 08:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:26.582367427 +0000 UTC m=+14.345046411" watchObservedRunningTime="2024-12-13 08:48:32.476156847 +0000 UTC m=+20.238835830" Dec 13 08:48:35.396514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478947371.mount: Deactivated successfully. 
Dec 13 08:48:37.746459 containerd[1596]: time="2024-12-13T08:48:37.746221051Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:37.750045 containerd[1596]: time="2024-12-13T08:48:37.749029191Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734723" Dec 13 08:48:37.750045 containerd[1596]: time="2024-12-13T08:48:37.749692651Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:37.759308 containerd[1596]: time="2024-12-13T08:48:37.759246970Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.355285815s" Dec 13 08:48:37.759916 containerd[1596]: time="2024-12-13T08:48:37.759855394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 08:48:37.761391 containerd[1596]: time="2024-12-13T08:48:37.761346876Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 08:48:37.767045 containerd[1596]: time="2024-12-13T08:48:37.766877018Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 08:48:37.830537 containerd[1596]: time="2024-12-13T08:48:37.830476465Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\"" Dec 13 08:48:37.830623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405550490.mount: Deactivated successfully. 
Dec 13 08:48:37.832582 containerd[1596]: time="2024-12-13T08:48:37.832321444Z" level=info msg="StartContainer for \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\"" Dec 13 08:48:37.969212 containerd[1596]: time="2024-12-13T08:48:37.968894928Z" level=info msg="StartContainer for \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\" returns successfully" Dec 13 08:48:38.134913 containerd[1596]: time="2024-12-13T08:48:38.122344107Z" level=info msg="shim disconnected" id=6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a namespace=k8s.io Dec 13 08:48:38.134913 containerd[1596]: time="2024-12-13T08:48:38.133984953Z" level=warning msg="cleaning up after shim disconnected" id=6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a namespace=k8s.io Dec 13 08:48:38.134913 containerd[1596]: time="2024-12-13T08:48:38.134007584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:38.616423 kubelet[2753]: E1213 08:48:38.616381 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:38.625447 containerd[1596]: time="2024-12-13T08:48:38.622771518Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 08:48:38.646826 containerd[1596]: time="2024-12-13T08:48:38.646703399Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\"" Dec 13 08:48:38.648258 containerd[1596]: time="2024-12-13T08:48:38.647491774Z" level=info msg="StartContainer for \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\"" Dec 13 08:48:38.732488 containerd[1596]: time="2024-12-13T08:48:38.732425925Z" level=info msg="StartContainer for \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\" returns successfully" Dec 13 08:48:38.753451 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 08:48:38.754131 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:48:38.754242 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:48:38.761725 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:48:38.812883 containerd[1596]: time="2024-12-13T08:48:38.812309703Z" level=info msg="shim disconnected" id=06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db namespace=k8s.io Dec 13 08:48:38.812883 containerd[1596]: time="2024-12-13T08:48:38.812581840Z" level=warning msg="cleaning up after shim disconnected" id=06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db namespace=k8s.io Dec 13 08:48:38.812883 containerd[1596]: time="2024-12-13T08:48:38.812711895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:38.815978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:48:38.827532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a-rootfs.mount: Deactivated successfully. 
Dec 13 08:48:38.841940 containerd[1596]: time="2024-12-13T08:48:38.841382954Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:48:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 08:48:39.615994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179741841.mount: Deactivated successfully. Dec 13 08:48:39.623948 kubelet[2753]: E1213 08:48:39.623916 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:39.634306 containerd[1596]: time="2024-12-13T08:48:39.633108385Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 08:48:39.684232 containerd[1596]: time="2024-12-13T08:48:39.684004065Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\"" Dec 13 08:48:39.686504 containerd[1596]: time="2024-12-13T08:48:39.685914920Z" level=info msg="StartContainer for \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\"" Dec 13 08:48:39.814947 containerd[1596]: time="2024-12-13T08:48:39.814891914Z" level=info msg="StartContainer for \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\" returns successfully" Dec 13 08:48:39.879324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e-rootfs.mount: Deactivated successfully. 
Dec 13 08:48:39.885575 containerd[1596]: time="2024-12-13T08:48:39.885445522Z" level=info msg="shim disconnected" id=1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e namespace=k8s.io Dec 13 08:48:39.885575 containerd[1596]: time="2024-12-13T08:48:39.885557855Z" level=warning msg="cleaning up after shim disconnected" id=1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e namespace=k8s.io Dec 13 08:48:39.885575 containerd[1596]: time="2024-12-13T08:48:39.885571502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:40.502967 containerd[1596]: time="2024-12-13T08:48:40.497170070Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:40.502967 containerd[1596]: time="2024-12-13T08:48:40.500474564Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Dec 13 08:48:40.505060 containerd[1596]: time="2024-12-13T08:48:40.504983025Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:48:40.509979 containerd[1596]: time="2024-12-13T08:48:40.509513715Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.748122114s" Dec 13 08:48:40.509979 containerd[1596]: time="2024-12-13T08:48:40.509594016Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 08:48:40.513759 containerd[1596]: time="2024-12-13T08:48:40.513694167Z" level=info msg="CreateContainer within sandbox \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 08:48:40.537035 containerd[1596]: time="2024-12-13T08:48:40.536969110Z" level=info msg="CreateContainer within sandbox \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\"" Dec 13 08:48:40.538619 containerd[1596]: time="2024-12-13T08:48:40.538561565Z" level=info msg="StartContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\"" Dec 13 08:48:40.624673 containerd[1596]: time="2024-12-13T08:48:40.624619713Z" level=info msg="StartContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" returns successfully" Dec 13 08:48:40.637653 kubelet[2753]: E1213 08:48:40.637582 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:40.646295 containerd[1596]: time="2024-12-13T08:48:40.645693447Z" level=info msg="CreateContainer within sandbox 
\"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 08:48:40.680150 containerd[1596]: time="2024-12-13T08:48:40.680082906Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\"" Dec 13 08:48:40.682152 containerd[1596]: time="2024-12-13T08:48:40.681533453Z" level=info msg="StartContainer for \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\"" Dec 13 08:48:40.797403 containerd[1596]: time="2024-12-13T08:48:40.795361807Z" level=info msg="StartContainer for \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\" returns successfully" Dec 13 08:48:40.861813 containerd[1596]: time="2024-12-13T08:48:40.859879235Z" level=info msg="shim disconnected" id=c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2 namespace=k8s.io Dec 13 08:48:40.861813 containerd[1596]: time="2024-12-13T08:48:40.859958202Z" level=warning msg="cleaning up after shim disconnected" id=c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2 namespace=k8s.io Dec 13 08:48:40.861813 containerd[1596]: time="2024-12-13T08:48:40.859970056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:48:40.860942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2-rootfs.mount: Deactivated successfully. Dec 13 08:48:41.667297 kubelet[2753]: E1213 08:48:41.667251 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:41.667912 kubelet[2753]: E1213 08:48:41.667883 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:41.695094 containerd[1596]: time="2024-12-13T08:48:41.694932987Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 08:48:41.781255 containerd[1596]: time="2024-12-13T08:48:41.780456616Z" level=info msg="CreateContainer within sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\"" Dec 13 08:48:41.783108 containerd[1596]: time="2024-12-13T08:48:41.783055944Z" level=info msg="StartContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\"" Dec 13 08:48:41.979631 containerd[1596]: time="2024-12-13T08:48:41.979457105Z" level=info msg="StartContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" returns successfully" Dec 13 08:48:42.152097 systemd[1]: run-containerd-runc-k8s.io-2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346-runc.EycRZv.mount: Deactivated successfully. 
Dec 13 08:48:42.275337 kubelet[2753]: I1213 08:48:42.275308 2753 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:48:42.329436 kubelet[2753]: I1213 08:48:42.325991 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-nvs7v" podStartSLOduration=2.671544738 podStartE2EDuration="16.325929191s" podCreationTimestamp="2024-12-13 08:48:26 +0000 UTC" firstStartedPulling="2024-12-13 08:48:26.855482558 +0000 UTC m=+14.618161523" lastFinishedPulling="2024-12-13 08:48:40.509867012 +0000 UTC m=+28.272545976" observedRunningTime="2024-12-13 08:48:41.87250484 +0000 UTC m=+29.635183826" watchObservedRunningTime="2024-12-13 08:48:42.325929191 +0000 UTC m=+30.088608207" Dec 13 08:48:42.329436 kubelet[2753]: I1213 08:48:42.326172 2753 topology_manager.go:215] "Topology Admit Handler" podUID="a2d96a3c-8752-4b56-b361-0e9340efae30" podNamespace="kube-system" podName="coredns-76f75df574-q7zcq" Dec 13 08:48:42.333436 kubelet[2753]: I1213 08:48:42.332183 2753 topology_manager.go:215] "Topology Admit Handler" podUID="52f7659f-4330-4516-89e3-9e9e73ed0302" podNamespace="kube-system" podName="coredns-76f75df574-wvbbq" Dec 13 08:48:42.431090 kubelet[2753]: I1213 08:48:42.431038 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st9rr\" (UniqueName: \"kubernetes.io/projected/a2d96a3c-8752-4b56-b361-0e9340efae30-kube-api-access-st9rr\") pod \"coredns-76f75df574-q7zcq\" (UID: \"a2d96a3c-8752-4b56-b361-0e9340efae30\") " pod="kube-system/coredns-76f75df574-q7zcq" Dec 13 08:48:42.431090 kubelet[2753]: I1213 08:48:42.431101 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5w47\" (UniqueName: \"kubernetes.io/projected/52f7659f-4330-4516-89e3-9e9e73ed0302-kube-api-access-l5w47\") pod \"coredns-76f75df574-wvbbq\" (UID: \"52f7659f-4330-4516-89e3-9e9e73ed0302\") " pod="kube-system/coredns-76f75df574-wvbbq" Dec 13 08:48:42.431371 kubelet[2753]: I1213 08:48:42.431139 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2d96a3c-8752-4b56-b361-0e9340efae30-config-volume\") pod \"coredns-76f75df574-q7zcq\" (UID: \"a2d96a3c-8752-4b56-b361-0e9340efae30\") " pod="kube-system/coredns-76f75df574-q7zcq" Dec 13 08:48:42.431371 kubelet[2753]: I1213 08:48:42.431178 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52f7659f-4330-4516-89e3-9e9e73ed0302-config-volume\") pod \"coredns-76f75df574-wvbbq\" (UID: \"52f7659f-4330-4516-89e3-9e9e73ed0302\") " pod="kube-system/coredns-76f75df574-wvbbq" Dec 13 08:48:42.658793 kubelet[2753]: E1213 08:48:42.655535 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:42.658793 kubelet[2753]: E1213 08:48:42.658407 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:42.659011 containerd[1596]: time="2024-12-13T08:48:42.657174445Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-q7zcq,Uid:a2d96a3c-8752-4b56-b361-0e9340efae30,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:42.659685 containerd[1596]: time="2024-12-13T08:48:42.659614722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wvbbq,Uid:52f7659f-4330-4516-89e3-9e9e73ed0302,Namespace:kube-system,Attempt:0,}" Dec 13 08:48:42.680146 kubelet[2753]: E1213 08:48:42.680077 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:42.684389 kubelet[2753]: E1213 08:48:42.682491 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:42.733111 kubelet[2753]: I1213 08:48:42.732012 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5grs2" podStartSLOduration=6.374205342 podStartE2EDuration="17.731954313s" podCreationTimestamp="2024-12-13 08:48:25 +0000 UTC" firstStartedPulling="2024-12-13 08:48:26.402655309 +0000 UTC m=+14.165334285" lastFinishedPulling="2024-12-13 08:48:37.760404293 +0000 UTC m=+25.523083256" observedRunningTime="2024-12-13 08:48:42.727013857 +0000 UTC m=+30.489692852" watchObservedRunningTime="2024-12-13 08:48:42.731954313 +0000 UTC m=+30.494633293" Dec 13 08:48:43.679419 kubelet[2753]: E1213 08:48:43.679376 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:44.554319 systemd-networkd[1219]: cilium_host: Link UP Dec 13 08:48:44.556728 systemd-networkd[1219]: cilium_net: Link UP Dec 13 08:48:44.556735 systemd-networkd[1219]: cilium_net: Gained carrier Dec 13 08:48:44.557369 systemd-networkd[1219]: cilium_host: Gained carrier Dec 13 08:48:44.683149 kubelet[2753]: E1213 08:48:44.683076 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:44.709935 systemd-networkd[1219]: cilium_net: Gained IPv6LL Dec 13 08:48:44.716864 systemd-networkd[1219]: cilium_vxlan: Link UP Dec 13 08:48:44.716875 systemd-networkd[1219]: cilium_vxlan: Gained carrier Dec 13 08:48:45.208300 kernel: NET: Registered PF_ALG protocol family Dec 13 08:48:45.461173 systemd-networkd[1219]: cilium_host: Gained IPv6LL Dec 13 08:48:46.146398 systemd-networkd[1219]: lxc_health: Link UP Dec 13 08:48:46.161576 systemd-networkd[1219]: lxc_health: Gained carrier Dec 13 08:48:46.217219 kubelet[2753]: E1213 08:48:46.215780 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:46.549404 systemd-networkd[1219]: cilium_vxlan: Gained IPv6LL Dec 13 08:48:46.686524 kubelet[2753]: E1213 08:48:46.686465 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:46.805863 systemd-networkd[1219]: lxcc27585ea1b2b: Link UP Dec 13 08:48:46.814140 kernel: eth0: renamed from tmp782af Dec 13 08:48:46.823673 systemd-networkd[1219]: lxcc27585ea1b2b: Gained carrier Dec 13 
08:48:46.868740 systemd-networkd[1219]: lxcf302cd9a7fb0: Link UP Dec 13 08:48:46.875241 kernel: eth0: renamed from tmpe9c30 Dec 13 08:48:46.889345 systemd-networkd[1219]: lxcf302cd9a7fb0: Gained carrier Dec 13 08:48:47.764833 systemd-networkd[1219]: lxc_health: Gained IPv6LL Dec 13 08:48:48.404654 systemd-networkd[1219]: lxcf302cd9a7fb0: Gained IPv6LL Dec 13 08:48:48.661329 systemd-networkd[1219]: lxcc27585ea1b2b: Gained IPv6LL Dec 13 08:48:51.029761 systemd[1]: Started sshd@9-143.198.66.7:22-147.75.109.163:32954.service - OpenSSH per-connection server daemon (147.75.109.163:32954). Dec 13 08:48:51.093281 sshd[3972]: Accepted publickey for core from 147.75.109.163 port 32954 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:51.094249 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:51.103806 systemd-logind[1573]: New session 8 of user core. Dec 13 08:48:51.110939 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 08:48:51.742287 sshd[3972]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:51.746323 systemd[1]: sshd@9-143.198.66.7:22-147.75.109.163:32954.service: Deactivated successfully. Dec 13 08:48:51.755993 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Dec 13 08:48:51.757084 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 08:48:51.758660 systemd-logind[1573]: Removed session 8. Dec 13 08:48:52.132158 containerd[1596]: time="2024-12-13T08:48:52.130097635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:52.132158 containerd[1596]: time="2024-12-13T08:48:52.130203895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:52.132158 containerd[1596]: time="2024-12-13T08:48:52.130232745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.135377 containerd[1596]: time="2024-12-13T08:48:52.130354062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.238586 containerd[1596]: time="2024-12-13T08:48:52.233084735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:48:52.238586 containerd[1596]: time="2024-12-13T08:48:52.236985088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:48:52.238586 containerd[1596]: time="2024-12-13T08:48:52.237014181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.238586 containerd[1596]: time="2024-12-13T08:48:52.237157782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:48:52.324437 containerd[1596]: time="2024-12-13T08:48:52.324398745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wvbbq,Uid:52f7659f-4330-4516-89e3-9e9e73ed0302,Namespace:kube-system,Attempt:0,} returns sandbox id \"782afe861f39e2a2e0226bb4587b60049a6fbefc62f3e2e0165ee771ceda7bf0\"" Dec 13 08:48:52.328282 kubelet[2753]: E1213 08:48:52.327958 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:52.343281 containerd[1596]: time="2024-12-13T08:48:52.343024762Z" level=info msg="CreateContainer within sandbox \"782afe861f39e2a2e0226bb4587b60049a6fbefc62f3e2e0165ee771ceda7bf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:48:52.392876 containerd[1596]: time="2024-12-13T08:48:52.392514590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q7zcq,Uid:a2d96a3c-8752-4b56-b361-0e9340efae30,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9c30a705a7141d569040a85e16278ff3cbf18c8c3e531b5d9765502bd900513\"" Dec 13 08:48:52.395427 kubelet[2753]: E1213 08:48:52.395010 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:52.398776 containerd[1596]: time="2024-12-13T08:48:52.398609583Z" level=info msg="CreateContainer within sandbox \"e9c30a705a7141d569040a85e16278ff3cbf18c8c3e531b5d9765502bd900513\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:48:52.403241 containerd[1596]: time="2024-12-13T08:48:52.402291734Z" level=info msg="CreateContainer within sandbox \"782afe861f39e2a2e0226bb4587b60049a6fbefc62f3e2e0165ee771ceda7bf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e364b8dbe2ac4cf3ca6cb7665936302e9f23a9a29fa41149d5264ebd9d8d87a\"" Dec 13 08:48:52.405220 containerd[1596]: time="2024-12-13T08:48:52.404854853Z" level=info msg="StartContainer for \"3e364b8dbe2ac4cf3ca6cb7665936302e9f23a9a29fa41149d5264ebd9d8d87a\"" Dec 13 08:48:52.441939 containerd[1596]: time="2024-12-13T08:48:52.441898060Z" level=info msg="CreateContainer within sandbox \"e9c30a705a7141d569040a85e16278ff3cbf18c8c3e531b5d9765502bd900513\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c09e6f25379aa1d46d7c90624f12fb171a7a902af5118cd58298d4a7c264b5aa\"" Dec 13 08:48:52.443270 containerd[1596]: time="2024-12-13T08:48:52.442734111Z" level=info msg="StartContainer for \"c09e6f25379aa1d46d7c90624f12fb171a7a902af5118cd58298d4a7c264b5aa\"" Dec 13 08:48:52.505636 containerd[1596]: time="2024-12-13T08:48:52.505505170Z" level=info msg="StartContainer for \"3e364b8dbe2ac4cf3ca6cb7665936302e9f23a9a29fa41149d5264ebd9d8d87a\" returns successfully" Dec 13 08:48:52.528784 containerd[1596]: time="2024-12-13T08:48:52.528655767Z" level=info msg="StartContainer for \"c09e6f25379aa1d46d7c90624f12fb171a7a902af5118cd58298d4a7c264b5aa\" returns successfully" Dec 13 08:48:52.713768 kubelet[2753]: E1213 08:48:52.713438 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:52.720695 kubelet[2753]: E1213 08:48:52.720413 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:52.730926 kubelet[2753]: I1213 08:48:52.730878 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q7zcq" podStartSLOduration=26.730821856 podStartE2EDuration="26.730821856s" podCreationTimestamp="2024-12-13 08:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:52.728626301 +0000 UTC m=+40.491305289" watchObservedRunningTime="2024-12-13 08:48:52.730821856 +0000 UTC m=+40.493500839" Dec 13 08:48:52.746198 kubelet[2753]: I1213 08:48:52.746038 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wvbbq" podStartSLOduration=26.745983708 podStartE2EDuration="26.745983708s" podCreationTimestamp="2024-12-13 08:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:48:52.745892577 +0000 UTC m=+40.508571558" watchObservedRunningTime="2024-12-13 08:48:52.745983708 +0000 UTC m=+40.508662692" Dec 13 08:48:53.154729 systemd[1]: run-containerd-runc-k8s.io-e9c30a705a7141d569040a85e16278ff3cbf18c8c3e531b5d9765502bd900513-runc.FrhyNo.mount: Deactivated successfully. Dec 13 08:48:53.523550 systemd[1]: Started sshd@10-143.198.66.7:22-77.91.87.131:50512.service - OpenSSH per-connection server daemon (77.91.87.131:50512). Dec 13 08:48:53.722789 kubelet[2753]: E1213 08:48:53.722284 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:53.722789 kubelet[2753]: E1213 08:48:53.722502 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:54.488638 sshd[4150]: Invalid user bet from 77.91.87.131 port 50512 Dec 13 08:48:54.667369 sshd[4150]: Received disconnect from 77.91.87.131 port 50512:11: Bye Bye [preauth] Dec 13 08:48:54.667369 sshd[4150]: Disconnected from invalid user bet 77.91.87.131 port 50512 [preauth] Dec 13 08:48:54.669874 systemd[1]: sshd@10-143.198.66.7:22-77.91.87.131:50512.service: Deactivated successfully. Dec 13 08:48:54.724242 kubelet[2753]: E1213 08:48:54.724086 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:54.724242 kubelet[2753]: E1213 08:48:54.724209 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:48:56.757634 systemd[1]: Started sshd@11-143.198.66.7:22-147.75.109.163:55464.service - OpenSSH per-connection server daemon (147.75.109.163:55464). Dec 13 08:48:56.814112 sshd[4162]: Accepted publickey for core from 147.75.109.163 port 55464 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:48:56.816404 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:48:56.821867 systemd-logind[1573]: New session 9 of user core. Dec 13 08:48:56.830535 systemd[1]: Started session-9.scope - Session 9 of User core. 
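The tracker records above make the SLO arithmetic explicit: podStartSLOduration is the end-to-end duration minus the image-pull window, which is why pods whose images needed no pull (kube-proxy and the coredns pair, with zero-valued 0001-01-01 pull timestamps) report an SLO duration equal to E2E. Checking cilium-5grs2 against the monotonic m=+ offsets in its record:

```python
# podStartSLOduration = E2E duration - (lastFinishedPulling - firstStartedPulling),
# using the m=+ monotonic offsets from the cilium-5grs2 record above.
e2e = 17.731954313                      # podStartE2EDuration
pull = 25.523083256 - 14.165334285      # lastFinishedPulling - firstStartedPulling
print(round(e2e - pull, 9))             # 6.374205342 == logged podStartSLOduration
```

The cilium-operator-5cc964979-nvs7v record checks out the same way: 16.325929191 − (28.272545976 − 14.618161523) = 2.671544738.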
Dec 13 08:48:57.015523 sshd[4162]: pam_unix(sshd:session): session closed for user core Dec 13 08:48:57.022553 systemd[1]: sshd@11-143.198.66.7:22-147.75.109.163:55464.service: Deactivated successfully. Dec 13 08:48:57.027232 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Dec 13 08:48:57.027973 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 08:48:57.030006 systemd-logind[1573]: Removed session 9. Dec 13 08:49:00.922865 systemd[1]: Started sshd@12-143.198.66.7:22-87.14.61.37:34262.service - OpenSSH per-connection server daemon (87.14.61.37:34262). Dec 13 08:49:01.850075 sshd[4179]: Invalid user asdf from 87.14.61.37 port 34262 Dec 13 08:49:02.023222 sshd[4179]: Received disconnect from 87.14.61.37 port 34262:11: Bye Bye [preauth] Dec 13 08:49:02.023222 sshd[4179]: Disconnected from invalid user asdf 87.14.61.37 port 34262 [preauth] Dec 13 08:49:02.029756 systemd[1]: Started sshd@13-143.198.66.7:22-147.75.109.163:55466.service - OpenSSH per-connection server daemon (147.75.109.163:55466). Dec 13 08:49:02.033815 systemd[1]: sshd@12-143.198.66.7:22-87.14.61.37:34262.service: Deactivated successfully. Dec 13 08:49:02.108604 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 55466 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:02.110752 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:02.122591 systemd-logind[1573]: New session 10 of user core. Dec 13 08:49:02.128824 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 08:49:02.350609 sshd[4182]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:02.355648 systemd[1]: sshd@13-143.198.66.7:22-147.75.109.163:55466.service: Deactivated successfully. Dec 13 08:49:02.366920 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 08:49:02.366981 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Dec 13 08:49:02.371028 systemd-logind[1573]: Removed session 10. Dec 13 08:49:07.365645 systemd[1]: Started sshd@14-143.198.66.7:22-147.75.109.163:43494.service - OpenSSH per-connection server daemon (147.75.109.163:43494). Dec 13 08:49:07.425008 sshd[4198]: Accepted publickey for core from 147.75.109.163 port 43494 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:07.428033 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:07.439220 systemd-logind[1573]: New session 11 of user core. Dec 13 08:49:07.445742 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 08:49:07.605126 sshd[4198]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:07.610001 systemd[1]: sshd@14-143.198.66.7:22-147.75.109.163:43494.service: Deactivated successfully. Dec 13 08:49:07.615710 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 08:49:07.617628 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Dec 13 08:49:07.618787 systemd-logind[1573]: Removed session 11. Dec 13 08:49:12.618636 systemd[1]: Started sshd@15-143.198.66.7:22-147.75.109.163:43500.service - OpenSSH per-connection server daemon (147.75.109.163:43500). 
Dec 13 08:49:12.687771 sshd[4214]: Accepted publickey for core from 147.75.109.163 port 43500 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:12.690263 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:12.699369 systemd-logind[1573]: New session 12 of user core. Dec 13 08:49:12.707419 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 08:49:12.876571 sshd[4214]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:12.887993 systemd[1]: Started sshd@16-143.198.66.7:22-147.75.109.163:43504.service - OpenSSH per-connection server daemon (147.75.109.163:43504). Dec 13 08:49:12.888851 systemd[1]: sshd@15-143.198.66.7:22-147.75.109.163:43500.service: Deactivated successfully. Dec 13 08:49:12.895367 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 08:49:12.898471 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Dec 13 08:49:12.901485 systemd-logind[1573]: Removed session 12. Dec 13 08:49:12.949678 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 43504 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:12.952036 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:12.964302 systemd-logind[1573]: New session 13 of user core. Dec 13 08:49:12.974793 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 08:49:13.183300 sshd[4225]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:13.192595 systemd[1]: Started sshd@17-143.198.66.7:22-147.75.109.163:43510.service - OpenSSH per-connection server daemon (147.75.109.163:43510). Dec 13 08:49:13.193430 systemd[1]: sshd@16-143.198.66.7:22-147.75.109.163:43504.service: Deactivated successfully. Dec 13 08:49:13.206316 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 08:49:13.216783 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Dec 13 08:49:13.219381 systemd-logind[1573]: Removed session 13. Dec 13 08:49:13.280752 sshd[4237]: Accepted publickey for core from 147.75.109.163 port 43510 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:13.283147 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:13.292740 systemd-logind[1573]: New session 14 of user core. Dec 13 08:49:13.299673 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 08:49:13.460258 sshd[4237]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:13.466316 systemd[1]: sshd@17-143.198.66.7:22-147.75.109.163:43510.service: Deactivated successfully. Dec 13 08:49:13.473636 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 08:49:13.475917 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Dec 13 08:49:13.477734 systemd-logind[1573]: Removed session 14. Dec 13 08:49:18.482766 systemd[1]: Started sshd@18-143.198.66.7:22-147.75.109.163:41276.service - OpenSSH per-connection server daemon (147.75.109.163:41276). Dec 13 08:49:18.534804 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 41276 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:18.537258 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:18.545352 systemd-logind[1573]: New session 15 of user core. Dec 13 08:49:18.551750 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 08:49:18.703167 sshd[4255]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:18.708254 systemd[1]: sshd@18-143.198.66.7:22-147.75.109.163:41276.service: Deactivated successfully. Dec 13 08:49:18.712945 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 08:49:18.713932 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Dec 13 08:49:18.716032 systemd-logind[1573]: Removed session 15. Dec 13 08:49:22.453131 kubelet[2753]: E1213 08:49:22.453091 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:49:23.712690 systemd[1]: Started sshd@19-143.198.66.7:22-147.75.109.163:41290.service - OpenSSH per-connection server daemon (147.75.109.163:41290). Dec 13 08:49:23.774975 sshd[4269]: Accepted publickey for core from 147.75.109.163 port 41290 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:23.777906 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:23.786367 systemd-logind[1573]: New session 16 of user core. Dec 13 08:49:23.791702 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 08:49:23.960610 sshd[4269]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:23.972739 systemd[1]: Started sshd@20-143.198.66.7:22-147.75.109.163:41304.service - OpenSSH per-connection server daemon (147.75.109.163:41304). Dec 13 08:49:23.976952 systemd[1]: sshd@19-143.198.66.7:22-147.75.109.163:41290.service: Deactivated successfully. Dec 13 08:49:23.984776 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 08:49:23.987504 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Dec 13 08:49:23.989085 systemd-logind[1573]: Removed session 16. Dec 13 08:49:24.029380 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 41304 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:24.032225 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:24.040892 systemd-logind[1573]: New session 17 of user core. Dec 13 08:49:24.047798 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 08:49:24.505226 sshd[4280]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:24.515720 systemd[1]: Started sshd@21-143.198.66.7:22-147.75.109.163:41312.service - OpenSSH per-connection server daemon (147.75.109.163:41312). Dec 13 08:49:24.516246 systemd[1]: sshd@20-143.198.66.7:22-147.75.109.163:41304.service: Deactivated successfully. Dec 13 08:49:24.523220 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 08:49:24.527321 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Dec 13 08:49:24.533748 systemd-logind[1573]: Removed session 17. Dec 13 08:49:24.589503 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 41312 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:24.592833 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:24.600358 systemd-logind[1573]: New session 18 of user core. Dec 13 08:49:24.606616 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 08:49:25.510712 systemd[1]: Started sshd@22-143.198.66.7:22-139.59.101.197:37000.service - OpenSSH per-connection server daemon (139.59.101.197:37000). 
Dec 13 08:49:26.475548 sshd[4306]: Invalid user zel from 139.59.101.197 port 37000 Dec 13 08:49:26.659362 sshd[4306]: Received disconnect from 139.59.101.197 port 37000:11: Bye Bye [preauth] Dec 13 08:49:26.659362 sshd[4306]: Disconnected from invalid user zel 139.59.101.197 port 37000 [preauth] Dec 13 08:49:26.662646 systemd[1]: sshd@22-143.198.66.7:22-139.59.101.197:37000.service: Deactivated successfully. Dec 13 08:49:26.969299 sshd[4292]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:26.988790 systemd[1]: Started sshd@23-143.198.66.7:22-147.75.109.163:47644.service - OpenSSH per-connection server daemon (147.75.109.163:47644). Dec 13 08:49:26.989768 systemd[1]: sshd@21-143.198.66.7:22-147.75.109.163:41312.service: Deactivated successfully. Dec 13 08:49:27.012175 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 08:49:27.015443 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Dec 13 08:49:27.020214 systemd-logind[1573]: Removed session 18. Dec 13 08:49:27.066916 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 47644 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:27.072293 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:27.084279 systemd-logind[1573]: New session 19 of user core. Dec 13 08:49:27.090636 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 08:49:27.526511 sshd[4319]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:27.532164 systemd[1]: sshd@23-143.198.66.7:22-147.75.109.163:47644.service: Deactivated successfully. Dec 13 08:49:27.541943 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 08:49:27.548163 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Dec 13 08:49:27.557679 systemd[1]: Started sshd@24-143.198.66.7:22-147.75.109.163:47648.service - OpenSSH per-connection server daemon (147.75.109.163:47648). Dec 13 08:49:27.561816 systemd-logind[1573]: Removed session 19. Dec 13 08:49:27.617231 sshd[4334]: Accepted publickey for core from 147.75.109.163 port 47648 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:27.619687 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:27.629757 systemd-logind[1573]: New session 20 of user core. Dec 13 08:49:27.635842 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 08:49:27.816515 sshd[4334]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:27.821760 systemd[1]: sshd@24-143.198.66.7:22-147.75.109.163:47648.service: Deactivated successfully. Dec 13 08:49:27.828612 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 08:49:27.828685 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Dec 13 08:49:27.833458 systemd-logind[1573]: Removed session 20. Dec 13 08:49:32.825821 systemd[1]: Started sshd@25-143.198.66.7:22-147.75.109.163:47662.service - OpenSSH per-connection server daemon (147.75.109.163:47662). Dec 13 08:49:32.890168 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 47662 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:32.892540 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:32.898195 systemd-logind[1573]: New session 21 of user core. Dec 13 08:49:32.905765 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 08:49:33.054529 sshd[4348]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:33.063534 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. Dec 13 08:49:33.064521 systemd[1]: sshd@25-143.198.66.7:22-147.75.109.163:47662.service: Deactivated successfully. Dec 13 08:49:33.070645 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 08:49:33.073912 systemd-logind[1573]: Removed session 21. Dec 13 08:49:38.064568 systemd[1]: Started sshd@26-143.198.66.7:22-147.75.109.163:51738.service - OpenSSH per-connection server daemon (147.75.109.163:51738). Dec 13 08:49:38.114065 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 51738 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:38.116012 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:38.121803 systemd-logind[1573]: New session 22 of user core. Dec 13 08:49:38.130156 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 08:49:38.277528 sshd[4365]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:38.282266 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. Dec 13 08:49:38.282450 systemd[1]: sshd@26-143.198.66.7:22-147.75.109.163:51738.service: Deactivated successfully. Dec 13 08:49:38.290940 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 08:49:38.292591 systemd-logind[1573]: Removed session 22. Dec 13 08:49:39.450708 kubelet[2753]: E1213 08:49:39.450570 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:49:43.288675 systemd[1]: Started sshd@27-143.198.66.7:22-147.75.109.163:51746.service - OpenSSH per-connection server daemon (147.75.109.163:51746). Dec 13 08:49:43.339021 sshd[4378]: Accepted publickey for core from 147.75.109.163 port 51746 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:43.341609 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:43.349733 systemd-logind[1573]: New session 23 of user core. Dec 13 08:49:43.363970 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 08:49:43.522803 sshd[4378]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:43.527859 systemd[1]: sshd@27-143.198.66.7:22-147.75.109.163:51746.service: Deactivated successfully. Dec 13 08:49:43.534460 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 08:49:43.537251 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Dec 13 08:49:43.539737 systemd-logind[1573]: Removed session 23. Dec 13 08:49:44.450946 kubelet[2753]: E1213 08:49:44.450708 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:49:45.451206 kubelet[2753]: E1213 08:49:45.450986 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 08:49:48.535691 systemd[1]: Started sshd@28-143.198.66.7:22-147.75.109.163:40166.service - OpenSSH per-connection server daemon (147.75.109.163:40166). 
Dec 13 08:49:48.586022 sshd[4391]: Accepted publickey for core from 147.75.109.163 port 40166 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:48.588521 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:48.597090 systemd-logind[1573]: New session 24 of user core. Dec 13 08:49:48.608798 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 08:49:48.754531 sshd[4391]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:48.764377 systemd[1]: Started sshd@29-143.198.66.7:22-147.75.109.163:40172.service - OpenSSH per-connection server daemon (147.75.109.163:40172). Dec 13 08:49:48.766572 systemd[1]: sshd@28-143.198.66.7:22-147.75.109.163:40166.service: Deactivated successfully. Dec 13 08:49:48.771057 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 08:49:48.777016 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. Dec 13 08:49:48.779151 systemd-logind[1573]: Removed session 24. Dec 13 08:49:48.822476 sshd[4402]: Accepted publickey for core from 147.75.109.163 port 40172 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:48.824223 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:48.833229 systemd-logind[1573]: New session 25 of user core. Dec 13 08:49:48.838677 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 08:49:50.387890 containerd[1596]: time="2024-12-13T08:49:50.387766014Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:49:50.397374 containerd[1596]: time="2024-12-13T08:49:50.396676907Z" level=info msg="StopContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" with timeout 30 (s)" Dec 13 08:49:50.397374 containerd[1596]: time="2024-12-13T08:49:50.397249137Z" level=info msg="Stop container \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" with signal terminated" Dec 13 08:49:50.401478 containerd[1596]: time="2024-12-13T08:49:50.401241702Z" level=info msg="StopContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" with timeout 2 (s)" Dec 13 08:49:50.401901 containerd[1596]: time="2024-12-13T08:49:50.401796734Z" level=info msg="Stop container \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" with signal terminated" Dec 13 08:49:50.436486 systemd-networkd[1219]: lxc_health: Link DOWN Dec 13 08:49:50.436498 systemd-networkd[1219]: lxc_health: Lost carrier Dec 13 08:49:50.515444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8-rootfs.mount: Deactivated successfully. 
Dec 13 08:49:50.519032 containerd[1596]: time="2024-12-13T08:49:50.517570444Z" level=info msg="shim disconnected" id=9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8 namespace=k8s.io Dec 13 08:49:50.519032 containerd[1596]: time="2024-12-13T08:49:50.517632462Z" level=warning msg="cleaning up after shim disconnected" id=9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8 namespace=k8s.io Dec 13 08:49:50.519032 containerd[1596]: time="2024-12-13T08:49:50.517641112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:49:50.537937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346-rootfs.mount: Deactivated successfully. Dec 13 08:49:50.542487 containerd[1596]: time="2024-12-13T08:49:50.542178612Z" level=info msg="shim disconnected" id=2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346 namespace=k8s.io Dec 13 08:49:50.542487 containerd[1596]: time="2024-12-13T08:49:50.542297977Z" level=warning msg="cleaning up after shim disconnected" id=2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346 namespace=k8s.io Dec 13 08:49:50.542487 containerd[1596]: time="2024-12-13T08:49:50.542315252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:49:50.552734 containerd[1596]: time="2024-12-13T08:49:50.552676627Z" level=info msg="StopContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" returns successfully" Dec 13 08:49:50.556308 containerd[1596]: time="2024-12-13T08:49:50.553697697Z" level=info msg="StopPodSandbox for \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\"" Dec 13 08:49:50.556308 containerd[1596]: time="2024-12-13T08:49:50.553751875Z" level=info msg="Container to stop \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.562704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a-shm.mount: Deactivated successfully. 
Dec 13 08:49:50.590007 containerd[1596]: time="2024-12-13T08:49:50.589954124Z" level=info msg="StopContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" returns successfully" Dec 13 08:49:50.590608 containerd[1596]: time="2024-12-13T08:49:50.590572435Z" level=info msg="StopPodSandbox for \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\"" Dec 13 08:49:50.590702 containerd[1596]: time="2024-12-13T08:49:50.590629288Z" level=info msg="Container to stop \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.590702 containerd[1596]: time="2024-12-13T08:49:50.590646766Z" level=info msg="Container to stop \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.590702 containerd[1596]: time="2024-12-13T08:49:50.590661964Z" level=info msg="Container to stop \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.592455 containerd[1596]: time="2024-12-13T08:49:50.590678697Z" level=info msg="Container to stop \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.592455 containerd[1596]: time="2024-12-13T08:49:50.590740938Z" level=info msg="Container to stop \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 08:49:50.597776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad-shm.mount: Deactivated successfully. 
Dec 13 08:49:50.631993 containerd[1596]: time="2024-12-13T08:49:50.631885879Z" level=info msg="shim disconnected" id=4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a namespace=k8s.io Dec 13 08:49:50.631993 containerd[1596]: time="2024-12-13T08:49:50.631953270Z" level=warning msg="cleaning up after shim disconnected" id=4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a namespace=k8s.io Dec 13 08:49:50.631993 containerd[1596]: time="2024-12-13T08:49:50.631964765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:49:50.650927 containerd[1596]: time="2024-12-13T08:49:50.649541823Z" level=info msg="shim disconnected" id=eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad namespace=k8s.io Dec 13 08:49:50.650927 containerd[1596]: time="2024-12-13T08:49:50.649779895Z" level=warning msg="cleaning up after shim disconnected" id=eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad namespace=k8s.io Dec 13 08:49:50.650927 containerd[1596]: time="2024-12-13T08:49:50.649796518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:49:50.675853 containerd[1596]: time="2024-12-13T08:49:50.675793356Z" level=info msg="TearDown network for sandbox \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\" successfully" Dec 13 08:49:50.676116 containerd[1596]: time="2024-12-13T08:49:50.676087899Z" level=info msg="StopPodSandbox for \"4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a\" returns successfully" Dec 13 08:49:50.698018 containerd[1596]: time="2024-12-13T08:49:50.697937505Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:49:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 08:49:50.700012 containerd[1596]: time="2024-12-13T08:49:50.699932453Z" level=info msg="TearDown network for sandbox \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" successfully" Dec 13 08:49:50.700012 containerd[1596]: time="2024-12-13T08:49:50.699972346Z" level=info msg="StopPodSandbox for \"eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad\" returns successfully" Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840571 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-kernel\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840648 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hostproc\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840688 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c31d8e36-82d0-42c3-9aa1-11a73a25155c-clustermesh-secrets\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840719 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-lib-modules\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840750 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hubble-tls\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.841490 kubelet[2753]: I1213 08:49:50.840782 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8d1f803-cd52-4fcc-ae0a-660940990088-cilium-config-path\") pod \"d8d1f803-cd52-4fcc-ae0a-660940990088\" (UID: \"d8d1f803-cd52-4fcc-ae0a-660940990088\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.840824 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt8zq\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-kube-api-access-jt8zq\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.841026 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cni-path\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.841082 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-xtables-lock\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.841117 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-config-path\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.841163 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-bpf-maps\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842431 kubelet[2753]: I1213 08:49:50.841215 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-cgroup\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842707 kubelet[2753]: I1213 08:49:50.841245 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-etc-cni-netd\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842707 kubelet[2753]: I1213 08:49:50.841278 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-run\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.842707 kubelet[2753]: I1213 08:49:50.841321 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4qrt\" (UniqueName: \"kubernetes.io/projected/d8d1f803-cd52-4fcc-ae0a-660940990088-kube-api-access-r4qrt\") pod \"d8d1f803-cd52-4fcc-ae0a-660940990088\" (UID: \"d8d1f803-cd52-4fcc-ae0a-660940990088\") " Dec 13 08:49:50.842707 kubelet[2753]: I1213 08:49:50.841352 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-net\") pod \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\" (UID: \"c31d8e36-82d0-42c3-9aa1-11a73a25155c\") " Dec 13 08:49:50.844931 kubelet[2753]: I1213 08:49:50.842916 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.844931 kubelet[2753]: I1213 08:49:50.844423 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hostproc" (OuterVolumeSpecName: "hostproc") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.844931 kubelet[2753]: I1213 08:49:50.844431 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.844931 kubelet[2753]: I1213 08:49:50.844515 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.844931 kubelet[2753]: I1213 08:49:50.844543 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.849673 kubelet[2753]: I1213 08:49:50.849618 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31d8e36-82d0-42c3-9aa1-11a73a25155c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 08:49:50.850543 kubelet[2753]: I1213 08:49:50.849686 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 08:49:50.853747 kubelet[2753]: I1213 08:49:50.853683 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d1f803-cd52-4fcc-ae0a-660940990088-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8d1f803-cd52-4fcc-ae0a-660940990088" (UID: "d8d1f803-cd52-4fcc-ae0a-660940990088"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 08:49:50.853993 kubelet[2753]: I1213 08:49:50.853961 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 08:49:50.854134 kubelet[2753]: I1213 08:49:50.854117 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.854271 kubelet[2753]: I1213 08:49:50.854253 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.854394 kubelet[2753]: I1213 08:49:50.854378 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.854495 kubelet[2753]: I1213 08:49:50.854479 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.857925 kubelet[2753]: I1213 08:49:50.857830 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-kube-api-access-jt8zq" (OuterVolumeSpecName: "kube-api-access-jt8zq") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "kube-api-access-jt8zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 08:49:50.858571 kubelet[2753]: I1213 08:49:50.858462 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d1f803-cd52-4fcc-ae0a-660940990088-kube-api-access-r4qrt" (OuterVolumeSpecName: "kube-api-access-r4qrt") pod "d8d1f803-cd52-4fcc-ae0a-660940990088" (UID: "d8d1f803-cd52-4fcc-ae0a-660940990088"). InnerVolumeSpecName "kube-api-access-r4qrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 08:49:50.858571 kubelet[2753]: I1213 08:49:50.858529 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cni-path" (OuterVolumeSpecName: "cni-path") pod "c31d8e36-82d0-42c3-9aa1-11a73a25155c" (UID: "c31d8e36-82d0-42c3-9aa1-11a73a25155c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 08:49:50.882874 kubelet[2753]: I1213 08:49:50.882717 2753 scope.go:117] "RemoveContainer" containerID="2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346" Dec 13 08:49:50.903349 containerd[1596]: time="2024-12-13T08:49:50.902132729Z" level=info msg="RemoveContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\"" Dec 13 08:49:50.912693 containerd[1596]: time="2024-12-13T08:49:50.912302597Z" level=info msg="RemoveContainer for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" returns successfully" Dec 13 08:49:50.920613 kubelet[2753]: I1213 08:49:50.920504 2753 scope.go:117] "RemoveContainer" containerID="c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2" Dec 13 08:49:50.923855 containerd[1596]: time="2024-12-13T08:49:50.923765823Z" level=info msg="RemoveContainer for \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\"" Dec 13 08:49:50.928522 containerd[1596]: time="2024-12-13T08:49:50.928471281Z" level=info msg="RemoveContainer for \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\" returns successfully" Dec 13 08:49:50.929848 kubelet[2753]: I1213 08:49:50.929757 2753 scope.go:117] "RemoveContainer" containerID="1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e" Dec 13 08:49:50.934512 containerd[1596]: time="2024-12-13T08:49:50.934464708Z" level=info msg="RemoveContainer for \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\"" Dec 13 08:49:50.940582 containerd[1596]: time="2024-12-13T08:49:50.939307048Z" level=info msg="RemoveContainer for \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\" returns successfully" Dec 13 08:49:50.941402 kubelet[2753]: I1213 08:49:50.941101 2753 scope.go:117] "RemoveContainer" containerID="06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944352 2753 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r4qrt\" (UniqueName: \"kubernetes.io/projected/d8d1f803-cd52-4fcc-ae0a-660940990088-kube-api-access-r4qrt\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944393 2753 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-etc-cni-netd\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944413 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-run\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944433 2753 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-net\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944451 2753 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-host-proc-sys-kernel\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944468 2753 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hostproc\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944485 2753 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c31d8e36-82d0-42c3-9aa1-11a73a25155c-clustermesh-secrets\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.944711 kubelet[2753]: I1213 08:49:50.944501 2753 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-lib-modules\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.944517 2753 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-hubble-tls\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.944535 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8d1f803-cd52-4fcc-ae0a-660940990088-cilium-config-path\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.944552 2753 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jt8zq\" (UniqueName: \"kubernetes.io/projected/c31d8e36-82d0-42c3-9aa1-11a73a25155c-kube-api-access-jt8zq\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.944569 2753 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cni-path\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.945979 2753 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-xtables-lock\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.946034 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-config-path\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.946052 2753 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-bpf-maps\") on node \"ci-4081.2.1-b-2d211b5e28\" 
DevicePath \"\"" Dec 13 08:49:50.946401 kubelet[2753]: I1213 08:49:50.946071 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c31d8e36-82d0-42c3-9aa1-11a73a25155c-cilium-cgroup\") on node \"ci-4081.2.1-b-2d211b5e28\" DevicePath \"\"" Dec 13 08:49:50.947106 containerd[1596]: time="2024-12-13T08:49:50.945358714Z" level=info msg="RemoveContainer for \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\"" Dec 13 08:49:50.950793 containerd[1596]: time="2024-12-13T08:49:50.950741737Z" level=info msg="RemoveContainer for \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\" returns successfully" Dec 13 08:49:50.951394 kubelet[2753]: I1213 08:49:50.951322 2753 scope.go:117] "RemoveContainer" containerID="6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a" Dec 13 08:49:50.954453 containerd[1596]: time="2024-12-13T08:49:50.953970440Z" level=info msg="RemoveContainer for \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\"" Dec 13 08:49:50.964317 containerd[1596]: time="2024-12-13T08:49:50.964267412Z" level=info msg="RemoveContainer for \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\" returns successfully" Dec 13 08:49:50.964932 kubelet[2753]: I1213 08:49:50.964908 2753 scope.go:117] "RemoveContainer" containerID="2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346" Dec 13 08:49:50.977202 containerd[1596]: time="2024-12-13T08:49:50.965383259Z" level=error msg="ContainerStatus for \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\": not found" Dec 13 08:49:50.992458 kubelet[2753]: E1213 08:49:50.992281 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\": not found" containerID="2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346" Dec 13 08:49:51.017861 kubelet[2753]: I1213 08:49:51.017779 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346"} err="failed to get container status \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a150917883725a126bd9c168e3807aa5c0692ca24175f88219f8d8e0dded346\": not found" Dec 13 08:49:51.017861 kubelet[2753]: I1213 08:49:51.017864 2753 scope.go:117] "RemoveContainer" containerID="c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2" Dec 13 08:49:51.018460 containerd[1596]: time="2024-12-13T08:49:51.018400570Z" level=error msg="ContainerStatus for \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\": not found" Dec 13 08:49:51.019012 kubelet[2753]: E1213 08:49:51.018769 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\": not found" 
containerID="c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2" Dec 13 08:49:51.019012 kubelet[2753]: I1213 08:49:51.018846 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2"} err="failed to get container status \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2da7886f5cccfb532bfc6036e476beea944e9d83f22d6f52a45e28d01f7b6a2\": not found" Dec 13 08:49:51.019012 kubelet[2753]: I1213 08:49:51.018869 2753 scope.go:117] "RemoveContainer" containerID="1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e" Dec 13 08:49:51.019725 containerd[1596]: time="2024-12-13T08:49:51.019616406Z" level=error msg="ContainerStatus for \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\": not found" Dec 13 08:49:51.019833 kubelet[2753]: E1213 08:49:51.019819 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\": not found" containerID="1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e" Dec 13 08:49:51.019926 kubelet[2753]: I1213 08:49:51.019884 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e"} err="failed to get container status \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f84272f7bf096944c6c22c12bcec78a92d4c7e5d97fa49db8da1e36bdf0014e\": not found" Dec 13 08:49:51.019926 kubelet[2753]: I1213 08:49:51.019905 2753 scope.go:117] "RemoveContainer" containerID="06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db" Dec 13 08:49:51.020282 containerd[1596]: time="2024-12-13T08:49:51.020165291Z" level=error msg="ContainerStatus for \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\": not found" Dec 13 08:49:51.020437 kubelet[2753]: E1213 08:49:51.020416 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\": not found" containerID="06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db" Dec 13 08:49:51.020549 kubelet[2753]: I1213 08:49:51.020452 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db"} err="failed to get container status \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\": rpc error: code = NotFound desc = an error occurred when try to find container \"06b4ac1bc736827e901d3b99b1704266d4a9d02b60b70ddd85017f38ec2df0db\": not found" Dec 13 08:49:51.020549 kubelet[2753]: I1213 08:49:51.020467 2753 scope.go:117] "RemoveContainer" 
containerID="6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a" Dec 13 08:49:51.020982 containerd[1596]: time="2024-12-13T08:49:51.020854653Z" level=error msg="ContainerStatus for \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\": not found" Dec 13 08:49:51.021362 kubelet[2753]: E1213 08:49:51.021074 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\": not found" containerID="6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a" Dec 13 08:49:51.021362 kubelet[2753]: I1213 08:49:51.021110 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a"} err="failed to get container status \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bf8630a974cb304b2da9256f53049d61184638f12f9c9456bd0707a6623dd9a\": not found" Dec 13 08:49:51.021362 kubelet[2753]: I1213 08:49:51.021126 2753 scope.go:117] "RemoveContainer" containerID="9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8" Dec 13 08:49:51.023088 containerd[1596]: time="2024-12-13T08:49:51.022891118Z" level=info msg="RemoveContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\"" Dec 13 08:49:51.027554 containerd[1596]: time="2024-12-13T08:49:51.027428461Z" level=info msg="RemoveContainer for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" returns successfully" Dec 13 08:49:51.028283 kubelet[2753]: I1213 08:49:51.027821 2753 scope.go:117] "RemoveContainer" containerID="9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8" Dec 13 08:49:51.028561 containerd[1596]: time="2024-12-13T08:49:51.028475669Z" level=error msg="ContainerStatus for \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\": not found" Dec 13 08:49:51.028848 kubelet[2753]: E1213 08:49:51.028711 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\": not found" containerID="9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8" Dec 13 08:49:51.028848 kubelet[2753]: I1213 08:49:51.028764 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8"} err="failed to get container status \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b62bef85be6d7ade5f640b0df21c03ed8523b1697cc6fd9c5d87b81f17260e8\": not found" Dec 13 08:49:51.346910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4df198b76a1339cc3b858aaf6faa53f52347c8e45e30d9d89980e4aa5d10453a-rootfs.mount: Deactivated successfully. 
Dec 13 08:49:51.347168 systemd[1]: var-lib-kubelet-pods-d8d1f803\x2dcd52\x2d4fcc\x2dae0a\x2d660940990088-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4qrt.mount: Deactivated successfully. Dec 13 08:49:51.347421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb464b3d205b9dfeb9c827ddfc7db3b09a44803c6b3bb6873eb6a88fe71409ad-rootfs.mount: Deactivated successfully. Dec 13 08:49:51.347581 systemd[1]: var-lib-kubelet-pods-c31d8e36\x2d82d0\x2d42c3\x2d9aa1\x2d11a73a25155c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djt8zq.mount: Deactivated successfully. Dec 13 08:49:51.347736 systemd[1]: var-lib-kubelet-pods-c31d8e36\x2d82d0\x2d42c3\x2d9aa1\x2d11a73a25155c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 08:49:51.347909 systemd[1]: var-lib-kubelet-pods-c31d8e36\x2d82d0\x2d42c3\x2d9aa1\x2d11a73a25155c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 08:49:52.191592 sshd[4402]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:52.201734 systemd[1]: Started sshd@30-143.198.66.7:22-147.75.109.163:40182.service - OpenSSH per-connection server daemon (147.75.109.163:40182). Dec 13 08:49:52.202581 systemd[1]: sshd@29-143.198.66.7:22-147.75.109.163:40172.service: Deactivated successfully. Dec 13 08:49:52.212102 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 08:49:52.214784 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. Dec 13 08:49:52.218369 systemd-logind[1573]: Removed session 25. Dec 13 08:49:52.269142 sshd[4566]: Accepted publickey for core from 147.75.109.163 port 40182 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:52.271443 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:52.280684 systemd-logind[1573]: New session 26 of user core. Dec 13 08:49:52.286859 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 08:49:52.455708 kubelet[2753]: I1213 08:49:52.455579 2753 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" path="/var/lib/kubelet/pods/c31d8e36-82d0-42c3-9aa1-11a73a25155c/volumes" Dec 13 08:49:52.456888 kubelet[2753]: I1213 08:49:52.456442 2753 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d8d1f803-cd52-4fcc-ae0a-660940990088" path="/var/lib/kubelet/pods/d8d1f803-cd52-4fcc-ae0a-660940990088/volumes" Dec 13 08:49:52.641135 kubelet[2753]: E1213 08:49:52.641031 2753 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 08:49:53.366946 sshd[4566]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:53.383593 systemd[1]: Started sshd@31-143.198.66.7:22-147.75.109.163:40188.service - OpenSSH per-connection server daemon (147.75.109.163:40188). Dec 13 08:49:53.385536 systemd[1]: sshd@30-143.198.66.7:22-147.75.109.163:40182.service: Deactivated successfully. Dec 13 08:49:53.394090 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 08:49:53.399924 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. Dec 13 08:49:53.407612 systemd-logind[1573]: Removed session 26. 
Dec 13 08:49:53.460673 kubelet[2753]: I1213 08:49:53.460610 2753 topology_manager.go:215] "Topology Admit Handler" podUID="650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996" podNamespace="kube-system" podName="cilium-9zzzz" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460731 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="mount-cgroup" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460749 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="mount-bpf-fs" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460763 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="cilium-agent" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460779 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="apply-sysctl-overwrites" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460791 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8d1f803-cd52-4fcc-ae0a-660940990088" containerName="cilium-operator" Dec 13 08:49:53.461346 kubelet[2753]: E1213 08:49:53.460806 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="clean-cilium-state" Dec 13 08:49:53.461346 kubelet[2753]: I1213 08:49:53.460868 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31d8e36-82d0-42c3-9aa1-11a73a25155c" containerName="cilium-agent" Dec 13 08:49:53.461346 kubelet[2753]: I1213 08:49:53.460883 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d1f803-cd52-4fcc-ae0a-660940990088" containerName="cilium-operator" Dec 13 08:49:53.504514 sshd[4579]: Accepted publickey for core from 147.75.109.163 port 40188 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:53.506913 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:53.520269 systemd-logind[1573]: New session 27 of user core. Dec 13 08:49:53.526802 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 08:49:53.570553 kubelet[2753]: I1213 08:49:53.570487 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-cilium-run\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.572782 kubelet[2753]: I1213 08:49:53.572684 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-bpf-maps\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.572964 kubelet[2753]: I1213 08:49:53.572815 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-etc-cni-netd\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.572964 kubelet[2753]: I1213 08:49:53.572872 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-cilium-config-path\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.572964 kubelet[2753]: I1213 08:49:53.572911 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-hubble-tls\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.572964 kubelet[2753]: I1213 08:49:53.572943 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-xtables-lock\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573328 kubelet[2753]: I1213 08:49:53.572985 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-cilium-ipsec-secrets\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573328 kubelet[2753]: I1213 08:49:53.573023 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-host-proc-sys-net\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573328 kubelet[2753]: I1213 08:49:53.573056 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-cilium-cgroup\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573328 kubelet[2753]: I1213 08:49:53.573087 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-lib-modules\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573328 kubelet[2753]: I1213 08:49:53.573124 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-host-proc-sys-kernel\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573620 kubelet[2753]: I1213 08:49:53.573156 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-clustermesh-secrets\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573620 kubelet[2753]: I1213 08:49:53.573256 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdp9s\" (UniqueName: \"kubernetes.io/projected/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-kube-api-access-jdp9s\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573620 kubelet[2753]: I1213 08:49:53.573290 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-hostproc\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.573620 kubelet[2753]: I1213 08:49:53.573320 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996-cni-path\") pod \"cilium-9zzzz\" (UID: \"650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996\") " pod="kube-system/cilium-9zzzz" Dec 13 08:49:53.595955 sshd[4579]: pam_unix(sshd:session): session closed for user core Dec 13 08:49:53.605771 systemd[1]: Started sshd@32-143.198.66.7:22-147.75.109.163:40198.service - OpenSSH per-connection server daemon (147.75.109.163:40198). Dec 13 08:49:53.607673 systemd[1]: sshd@31-143.198.66.7:22-147.75.109.163:40188.service: Deactivated successfully. Dec 13 08:49:53.615909 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 08:49:53.618753 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit. Dec 13 08:49:53.624727 systemd-logind[1573]: Removed session 27. Dec 13 08:49:53.669313 sshd[4590]: Accepted publickey for core from 147.75.109.163 port 40198 ssh2: RSA SHA256:GmRBCjv5DLbtT++ktFQz5R9M6+onrAQ9dTcgZ+NRPZM Dec 13 08:49:53.671695 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:49:53.683371 systemd-logind[1573]: New session 28 of user core. Dec 13 08:49:53.695735 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 08:49:53.789412 kubelet[2753]: E1213 08:49:53.789371 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:53.793163 containerd[1596]: time="2024-12-13T08:49:53.792506956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zzzz,Uid:650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996,Namespace:kube-system,Attempt:0,}"
Dec 13 08:49:53.859363 containerd[1596]: time="2024-12-13T08:49:53.858462114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 08:49:53.859363 containerd[1596]: time="2024-12-13T08:49:53.858563178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 08:49:53.859363 containerd[1596]: time="2024-12-13T08:49:53.858592015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:49:53.859363 containerd[1596]: time="2024-12-13T08:49:53.858777488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 08:49:53.939496 containerd[1596]: time="2024-12-13T08:49:53.938952224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zzzz,Uid:650a7cc4-1ee9-4e40-ab7f-e6d0fb1b8996,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\""
Dec 13 08:49:53.943122 kubelet[2753]: E1213 08:49:53.942882 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:53.952225 containerd[1596]: time="2024-12-13T08:49:53.951993637Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 08:49:53.974210 containerd[1596]: time="2024-12-13T08:49:53.973958510Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9f438350deb5b4fea6c9d7720851e6d566848af0d2ea2357b1d26ff12e4656c\""
Dec 13 08:49:53.975936 containerd[1596]: time="2024-12-13T08:49:53.974906566Z" level=info msg="StartContainer for \"b9f438350deb5b4fea6c9d7720851e6d566848af0d2ea2357b1d26ff12e4656c\""
Dec 13 08:49:54.064968 containerd[1596]: time="2024-12-13T08:49:54.064845265Z" level=info msg="StartContainer for \"b9f438350deb5b4fea6c9d7720851e6d566848af0d2ea2357b1d26ff12e4656c\" returns successfully"
Dec 13 08:49:54.187259 containerd[1596]: time="2024-12-13T08:49:54.187035317Z" level=info msg="shim disconnected" id=b9f438350deb5b4fea6c9d7720851e6d566848af0d2ea2357b1d26ff12e4656c namespace=k8s.io
Dec 13 08:49:54.187259 containerd[1596]: time="2024-12-13T08:49:54.187122028Z" level=warning msg="cleaning up after shim disconnected" id=b9f438350deb5b4fea6c9d7720851e6d566848af0d2ea2357b1d26ff12e4656c namespace=k8s.io
Dec 13 08:49:54.187259 containerd[1596]: time="2024-12-13T08:49:54.187134150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 08:49:54.493628 kubelet[2753]: I1213 08:49:54.493593 2753 setters.go:568] "Node became not ready" node="ci-4081.2.1-b-2d211b5e28" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T08:49:54Z","lastTransitionTime":"2024-12-13T08:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 08:49:54.924726 kubelet[2753]: E1213 08:49:54.924112 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:54.930051 containerd[1596]: time="2024-12-13T08:49:54.929661724Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 08:49:54.950721 containerd[1596]: time="2024-12-13T08:49:54.950060569Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283\""
Dec 13 08:49:54.954290 containerd[1596]: time="2024-12-13T08:49:54.953353462Z" level=info msg="StartContainer for \"72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283\""
Dec 13 08:49:55.033433 containerd[1596]: time="2024-12-13T08:49:55.033368603Z" level=info msg="StartContainer for \"72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283\" returns successfully"
Dec 13 08:49:55.096851 containerd[1596]: time="2024-12-13T08:49:55.096784878Z" level=info msg="shim disconnected" id=72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283 namespace=k8s.io
Dec 13 08:49:55.097328 containerd[1596]: time="2024-12-13T08:49:55.097081466Z" level=warning msg="cleaning up after shim disconnected" id=72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283 namespace=k8s.io
Dec 13 08:49:55.097328 containerd[1596]: time="2024-12-13T08:49:55.097097406Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 08:49:55.119260 containerd[1596]: time="2024-12-13T08:49:55.117776842Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:49:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 08:49:55.699767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e4d6388cd28b4ada0190a9010d71347a644bc4498d4f6f2ff87c4ff8f45283-rootfs.mount: Deactivated successfully.
Dec 13 08:49:55.930408 kubelet[2753]: E1213 08:49:55.930145 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:55.934246 containerd[1596]: time="2024-12-13T08:49:55.933594542Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 08:49:55.962400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580540370.mount: Deactivated successfully.
Dec 13 08:49:55.967431 containerd[1596]: time="2024-12-13T08:49:55.967288277Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a\""
Dec 13 08:49:55.970530 containerd[1596]: time="2024-12-13T08:49:55.969141390Z" level=info msg="StartContainer for \"1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a\""
Dec 13 08:49:56.073226 containerd[1596]: time="2024-12-13T08:49:56.073094629Z" level=info msg="StartContainer for \"1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a\" returns successfully"
Dec 13 08:49:56.123159 containerd[1596]: time="2024-12-13T08:49:56.122972932Z" level=info msg="shim disconnected" id=1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a namespace=k8s.io
Dec 13 08:49:56.123159 containerd[1596]: time="2024-12-13T08:49:56.123054841Z" level=warning msg="cleaning up after shim disconnected" id=1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a namespace=k8s.io
Dec 13 08:49:56.123159 containerd[1596]: time="2024-12-13T08:49:56.123066568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 08:49:56.700041 systemd[1]: run-containerd-runc-k8s.io-1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a-runc.nI1r6R.mount: Deactivated successfully.
Dec 13 08:49:56.700269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e407854aadd72cff74a17d972beef10a897f68fd6dc1c1d9b41c6bf2e87b78a-rootfs.mount: Deactivated successfully.
Dec 13 08:49:56.935588 kubelet[2753]: E1213 08:49:56.935533 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:56.941937 containerd[1596]: time="2024-12-13T08:49:56.941785028Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 08:49:56.974629 containerd[1596]: time="2024-12-13T08:49:56.973079441Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8\""
Dec 13 08:49:56.974629 containerd[1596]: time="2024-12-13T08:49:56.974300075Z" level=info msg="StartContainer for \"46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8\""
Dec 13 08:49:57.058304 containerd[1596]: time="2024-12-13T08:49:57.058026504Z" level=info msg="StartContainer for \"46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8\" returns successfully"
Dec 13 08:49:57.093221 containerd[1596]: time="2024-12-13T08:49:57.092893106Z" level=info msg="shim disconnected" id=46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8 namespace=k8s.io
Dec 13 08:49:57.093221 containerd[1596]: time="2024-12-13T08:49:57.093006761Z" level=warning msg="cleaning up after shim disconnected" id=46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8 namespace=k8s.io
Dec 13 08:49:57.093221 containerd[1596]: time="2024-12-13T08:49:57.093022017Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 08:49:57.643044 kubelet[2753]: E1213 08:49:57.642989 2753 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 08:49:57.700362 systemd[1]: run-containerd-runc-k8s.io-46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8-runc.buozjV.mount: Deactivated successfully.
Dec 13 08:49:57.700611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46855fabae6573570059ebde8a663dd2846e971065a1a892259941d2dd7471b8-rootfs.mount: Deactivated successfully.
Dec 13 08:49:57.941753 kubelet[2753]: E1213 08:49:57.940739 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:57.949356 containerd[1596]: time="2024-12-13T08:49:57.947698194Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 08:49:57.971177 containerd[1596]: time="2024-12-13T08:49:57.969850333Z" level=info msg="CreateContainer within sandbox \"2c5a97549fca36dad323b69688c7ca4a31b3b14851be6bae61845f74bbfb5aa4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032\""
Dec 13 08:49:57.974581 containerd[1596]: time="2024-12-13T08:49:57.972400637Z" level=info msg="StartContainer for \"1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032\""
Dec 13 08:49:58.062870 containerd[1596]: time="2024-12-13T08:49:58.062701872Z" level=info msg="StartContainer for \"1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032\" returns successfully"
Dec 13 08:49:58.596133 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 08:49:58.702581 systemd[1]: run-containerd-runc-k8s.io-1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032-runc.gpfIAl.mount: Deactivated successfully.
Dec 13 08:49:58.949173 kubelet[2753]: E1213 08:49:58.946585 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:49:59.951125 kubelet[2753]: E1213 08:49:59.951010 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:50:02.912300 systemd[1]: run-containerd-runc-k8s.io-1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032-runc.Wc7Mxk.mount: Deactivated successfully.
Dec 13 08:50:03.523434 systemd-networkd[1219]: lxc_health: Link UP
Dec 13 08:50:03.525905 systemd-networkd[1219]: lxc_health: Gained carrier
Dec 13 08:50:03.806361 kubelet[2753]: E1213 08:50:03.805671 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:50:03.856235 kubelet[2753]: I1213 08:50:03.853521 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9zzzz" podStartSLOduration=10.853466228 podStartE2EDuration="10.853466228s" podCreationTimestamp="2024-12-13 08:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:49:58.975775285 +0000 UTC m=+106.738454269" watchObservedRunningTime="2024-12-13 08:50:03.853466228 +0000 UTC m=+111.616145214"
Dec 13 08:50:03.968268 kubelet[2753]: E1213 08:50:03.968155 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 08:50:05.142305 systemd-networkd[1219]: lxc_health: Gained IPv6LL
Dec 13 08:50:09.709248 systemd[1]: run-containerd-runc-k8s.io-1b39528fb6fc298cb6dc17c3ef1250f0df65308ec6b72b18adfa0ed194cc7032-runc.MD24UI.mount: Deactivated successfully.
Dec 13 08:50:09.803012 sshd[4590]: pam_unix(sshd:session): session closed for user core
Dec 13 08:50:09.809067 systemd-logind[1573]: Session 28 logged out. Waiting for processes to exit.
Dec 13 08:50:09.809325 systemd[1]: sshd@32-143.198.66.7:22-147.75.109.163:40198.service: Deactivated successfully.
Dec 13 08:50:09.821615 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 08:50:09.827327 systemd-logind[1573]: Removed session 28.