Nov 12 20:51:53.166739 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:51:53.166785 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:51:53.166805 kernel: BIOS-provided physical RAM map:
Nov 12 20:51:53.166815 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:51:53.166823 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:51:53.166833 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:51:53.166845 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 12 20:51:53.166856 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 12 20:51:53.166867 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:51:53.166883 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:51:53.166906 kernel: NX (Execute Disable) protection: active
Nov 12 20:51:53.166918 kernel: APIC: Static calls initialized
Nov 12 20:51:53.166930 kernel: SMBIOS 2.8 present.
Nov 12 20:51:53.166943 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 12 20:51:53.166959 kernel: Hypervisor detected: KVM
Nov 12 20:51:53.166976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:51:53.166989 kernel: kvm-clock: using sched offset of 4082208750 cycles
Nov 12 20:51:53.167009 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:51:53.167023 kernel: tsc: Detected 1999.997 MHz processor
Nov 12 20:51:53.167037 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:51:53.167051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:51:53.167065 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 12 20:51:53.167079 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:51:53.167093 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:51:53.167110 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:51:53.167123 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 12 20:51:53.167137 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167151 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167165 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167178 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 12 20:51:53.167192 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167205 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167219 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167235 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:51:53.167249 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 12 20:51:53.167263 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 12 20:51:53.167276 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 12 20:51:53.167290 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 12 20:51:53.167303 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 12 20:51:53.167317 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 12 20:51:53.167345 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 12 20:51:53.167360 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:51:53.167374 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:51:53.167389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 12 20:51:53.167405 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 12 20:51:53.167420 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 12 20:51:53.167435 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 12 20:51:53.167452 kernel: Zone ranges:
Nov 12 20:51:53.167467 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:51:53.167482 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 12 20:51:53.167496 kernel: Normal empty
Nov 12 20:51:53.167510 kernel: Movable zone start for each node
Nov 12 20:51:53.167525 kernel: Early memory node ranges
Nov 12 20:51:53.167539 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:51:53.167554 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 12 20:51:53.167568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 12 20:51:53.167586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:51:53.167607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:51:53.169686 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 12 20:51:53.169716 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:51:53.169727 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:51:53.169739 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:51:53.169751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:51:53.169762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:51:53.169773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:51:53.169793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:51:53.169806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:51:53.169818 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:51:53.169833 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:51:53.169846 kernel: TSC deadline timer available
Nov 12 20:51:53.169859 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:51:53.169870 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:51:53.169882 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 12 20:51:53.169893 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:51:53.169920 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:51:53.169936 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:51:53.169948 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:51:53.169959 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:51:53.169971 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:51:53.169983 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 12 20:51:53.169998 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:51:53.170011 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:51:53.170026 kernel: random: crng init done
Nov 12 20:51:53.170037 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:51:53.170048 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:51:53.170060 kernel: Fallback order for Node 0: 0
Nov 12 20:51:53.170073 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 12 20:51:53.170084 kernel: Policy zone: DMA32
Nov 12 20:51:53.170095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:51:53.170107 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Nov 12 20:51:53.170120 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:51:53.170135 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:51:53.170147 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:51:53.170160 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:51:53.170171 kernel: Dynamic Preempt: voluntary
Nov 12 20:51:53.170184 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:51:53.170196 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:51:53.170208 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:51:53.170220 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:51:53.170233 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:51:53.170251 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:51:53.170264 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:51:53.170277 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:51:53.170290 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:51:53.170311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:51:53.170323 kernel: Console: colour VGA+ 80x25
Nov 12 20:51:53.170335 kernel: printk: console [tty0] enabled
Nov 12 20:51:53.170347 kernel: printk: console [ttyS0] enabled
Nov 12 20:51:53.170359 kernel: ACPI: Core revision 20230628
Nov 12 20:51:53.170371 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:51:53.170386 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:51:53.170398 kernel: x2apic enabled
Nov 12 20:51:53.170411 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:51:53.170423 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:51:53.170435 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Nov 12 20:51:53.170447 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Nov 12 20:51:53.170461 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 12 20:51:53.170474 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 12 20:51:53.170504 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:51:53.170517 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:51:53.170531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:51:53.170548 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:51:53.170563 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 12 20:51:53.170577 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:51:53.170591 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:51:53.170604 kernel: MDS: Mitigation: Clear CPU buffers
Nov 12 20:51:53.170795 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:51:53.170828 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:51:53.170844 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:51:53.170858 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:51:53.170872 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:51:53.170886 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 12 20:51:53.170902 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:51:53.170915 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:51:53.170929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:51:53.170947 kernel: landlock: Up and running.
Nov 12 20:51:53.170961 kernel: SELinux: Initializing.
Nov 12 20:51:53.170976 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:51:53.170990 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:51:53.171004 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 12 20:51:53.171017 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:51:53.171033 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:51:53.171048 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:51:53.171062 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 12 20:51:53.171079 kernel: signal: max sigframe size: 1776
Nov 12 20:51:53.171094 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:51:53.171109 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:51:53.171123 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:51:53.171137 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:51:53.171151 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:51:53.171170 kernel: .... node #0, CPUs: #1
Nov 12 20:51:53.171184 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:51:53.171199 kernel: smpboot: Max logical packages: 1
Nov 12 20:51:53.171217 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Nov 12 20:51:53.171231 kernel: devtmpfs: initialized
Nov 12 20:51:53.171245 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:51:53.171260 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:51:53.171274 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:51:53.171287 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:51:53.171303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:51:53.171317 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:51:53.171331 kernel: audit: type=2000 audit(1731444710.818:1): state=initialized audit_enabled=0 res=1
Nov 12 20:51:53.171349 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:51:53.171364 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:51:53.171379 kernel: cpuidle: using governor menu
Nov 12 20:51:53.171392 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:51:53.171406 kernel: dca service started, version 1.12.1
Nov 12 20:51:53.171421 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:51:53.171435 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:51:53.171449 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:51:53.171463 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:51:53.171481 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:51:53.171495 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:51:53.171509 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:51:53.171524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:51:53.171538 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:51:53.171553 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:51:53.171567 kernel: ACPI: Interpreter enabled
Nov 12 20:51:53.171582 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:51:53.171597 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:51:53.171614 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:51:53.172536 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:51:53.172552 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 12 20:51:53.172566 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:51:53.172910 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:51:53.173092 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:51:53.173262 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:51:53.173289 kernel: acpiphp: Slot [3] registered
Nov 12 20:51:53.173303 kernel: acpiphp: Slot [4] registered
Nov 12 20:51:53.173317 kernel: acpiphp: Slot [5] registered
Nov 12 20:51:53.173329 kernel: acpiphp: Slot [6] registered
Nov 12 20:51:53.173343 kernel: acpiphp: Slot [7] registered
Nov 12 20:51:53.173357 kernel: acpiphp: Slot [8] registered
Nov 12 20:51:53.173371 kernel: acpiphp: Slot [9] registered
Nov 12 20:51:53.173385 kernel: acpiphp: Slot [10] registered
Nov 12 20:51:53.173398 kernel: acpiphp: Slot [11] registered
Nov 12 20:51:53.173416 kernel: acpiphp: Slot [12] registered
Nov 12 20:51:53.173430 kernel: acpiphp: Slot [13] registered
Nov 12 20:51:53.173444 kernel: acpiphp: Slot [14] registered
Nov 12 20:51:53.173459 kernel: acpiphp: Slot [15] registered
Nov 12 20:51:53.173473 kernel: acpiphp: Slot [16] registered
Nov 12 20:51:53.173488 kernel: acpiphp: Slot [17] registered
Nov 12 20:51:53.173503 kernel: acpiphp: Slot [18] registered
Nov 12 20:51:53.173517 kernel: acpiphp: Slot [19] registered
Nov 12 20:51:53.173530 kernel: acpiphp: Slot [20] registered
Nov 12 20:51:53.173544 kernel: acpiphp: Slot [21] registered
Nov 12 20:51:53.173563 kernel: acpiphp: Slot [22] registered
Nov 12 20:51:53.173578 kernel: acpiphp: Slot [23] registered
Nov 12 20:51:53.173593 kernel: acpiphp: Slot [24] registered
Nov 12 20:51:53.173607 kernel: acpiphp: Slot [25] registered
Nov 12 20:51:53.173647 kernel: acpiphp: Slot [26] registered
Nov 12 20:51:53.173659 kernel: acpiphp: Slot [27] registered
Nov 12 20:51:53.173672 kernel: acpiphp: Slot [28] registered
Nov 12 20:51:53.173684 kernel: acpiphp: Slot [29] registered
Nov 12 20:51:53.173696 kernel: acpiphp: Slot [30] registered
Nov 12 20:51:53.173709 kernel: acpiphp: Slot [31] registered
Nov 12 20:51:53.173727 kernel: PCI host bridge to bus 0000:00
Nov 12 20:51:53.173931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:51:53.174064 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:51:53.174190 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:51:53.174313 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 12 20:51:53.174435 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 12 20:51:53.174553 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:51:53.174796 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:51:53.174959 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 12 20:51:53.175114 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 12 20:51:53.175265 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 12 20:51:53.175403 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 12 20:51:53.175539 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 12 20:51:53.175705 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 12 20:51:53.175840 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 12 20:51:53.176002 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 12 20:51:53.176138 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 12 20:51:53.176345 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:51:53.176482 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 12 20:51:53.176645 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 12 20:51:53.176840 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 12 20:51:53.177399 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 12 20:51:53.177560 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 12 20:51:53.177747 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 12 20:51:53.177887 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 12 20:51:53.178030 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:51:53.178214 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:51:53.178353 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 12 20:51:53.178500 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 12 20:51:53.178660 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 12 20:51:53.178833 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:51:53.178978 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 12 20:51:53.179122 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 12 20:51:53.179271 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 12 20:51:53.179454 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 12 20:51:53.179602 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 12 20:51:53.179758 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 12 20:51:53.179912 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 12 20:51:53.180070 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:51:53.180208 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:51:53.180349 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 12 20:51:53.180508 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 12 20:51:53.180764 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:51:53.180902 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 12 20:51:53.181033 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 12 20:51:53.181163 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 12 20:51:53.181334 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 12 20:51:53.181474 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 12 20:51:53.181605 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 12 20:51:53.181634 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:51:53.181649 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:51:53.181663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:51:53.181677 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:51:53.181692 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:51:53.181710 kernel: iommu: Default domain type: Translated
Nov 12 20:51:53.181724 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:51:53.181738 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:51:53.181752 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:51:53.181766 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:51:53.181780 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 12 20:51:53.181976 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 12 20:51:53.182124 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 12 20:51:53.182262 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:51:53.182288 kernel: vgaarb: loaded
Nov 12 20:51:53.182301 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:51:53.182316 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:51:53.182329 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:51:53.182342 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:51:53.182354 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:51:53.182366 kernel: pnp: PnP ACPI init
Nov 12 20:51:53.182379 kernel: pnp: PnP ACPI: found 4 devices
Nov 12 20:51:53.182392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:51:53.182407 kernel: NET: Registered PF_INET protocol family
Nov 12 20:51:53.182419 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:51:53.182430 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 20:51:53.182443 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:51:53.182455 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:51:53.182467 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 20:51:53.182480 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 20:51:53.182493 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:51:53.182510 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:51:53.182523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:51:53.182536 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:51:53.182791 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:51:53.182927 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:51:53.183052 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:51:53.183184 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 12 20:51:53.183352 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 12 20:51:53.183505 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 12 20:51:53.183685 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:51:53.183707 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 12 20:51:53.183861 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 46629 usecs
Nov 12 20:51:53.183882 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:51:53.183899 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:51:53.183912 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Nov 12 20:51:53.183926 kernel: Initialise system trusted keyrings
Nov 12 20:51:53.183940 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 20:51:53.183959 kernel: Key type asymmetric registered
Nov 12 20:51:53.183972 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:51:53.183984 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:51:53.184000 kernel: io scheduler mq-deadline registered
Nov 12 20:51:53.184015 kernel: io scheduler kyber registered
Nov 12 20:51:53.184031 kernel: io scheduler bfq registered
Nov 12 20:51:53.184046 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:51:53.184059 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 12 20:51:53.184072 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:51:53.184087 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:51:53.184100 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:51:53.184113 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:51:53.184125 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:51:53.184136 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:51:53.184147 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:51:53.184159 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:51:53.184352 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 12 20:51:53.184495 kernel: rtc_cmos 00:03: registered as rtc0
Nov 12 20:51:53.184638 kernel: rtc_cmos 00:03: setting system clock to 2024-11-12T20:51:52 UTC (1731444712)
Nov 12 20:51:53.184770 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 12 20:51:53.184789 kernel: intel_pstate: CPU model not supported
Nov 12 20:51:53.184804 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:51:53.184821 kernel: Segment Routing with IPv6
Nov 12 20:51:53.184835 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:51:53.184848 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:51:53.184863 kernel: Key type dns_resolver registered
Nov 12 20:51:53.184880 kernel: IPI shorthand broadcast: enabled
Nov 12 20:51:53.184894 kernel: sched_clock: Marking stable (1534008502, 165860524)->(1800765246, -100896220)
Nov 12 20:51:53.184908 kernel: registered taskstats version 1
Nov 12 20:51:53.184922 kernel: Loading compiled-in X.509 certificates
Nov 12 20:51:53.184937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:51:53.184951 kernel: Key type .fscrypt registered
Nov 12 20:51:53.184965 kernel: Key type fscrypt-provisioning registered
Nov 12 20:51:53.184980 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:51:53.184999 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:51:53.185015 kernel: ima: No architecture policies found
Nov 12 20:51:53.185031 kernel: clk: Disabling unused clocks
Nov 12 20:51:53.185047 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:51:53.185062 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:51:53.185102 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:51:53.185122 kernel: Run /init as init process
Nov 12 20:51:53.185139 kernel: with arguments:
Nov 12 20:51:53.185155 kernel: /init
Nov 12 20:51:53.185174 kernel: with environment:
Nov 12 20:51:53.185203 kernel: HOME=/
Nov 12 20:51:53.185219 kernel: TERM=linux
Nov 12 20:51:53.185235 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:51:53.185257 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:51:53.185278 systemd[1]: Detected virtualization kvm.
Nov 12 20:51:53.185296 systemd[1]: Detected architecture x86-64.
Nov 12 20:51:53.185312 systemd[1]: Running in initrd.
Nov 12 20:51:53.185332 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:51:53.185348 systemd[1]: Hostname set to .
Nov 12 20:51:53.185367 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:51:53.185384 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:51:53.185401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:51:53.185418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:51:53.185436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:51:53.185453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:51:53.185474 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:51:53.185491 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:51:53.185511 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:51:53.185529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:51:53.185546 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:51:53.185564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:51:53.185581 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:51:53.185601 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:51:53.185633 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:51:53.185651 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:51:53.185665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:51:53.185681 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:51:53.185700 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:51:53.185716 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:51:53.185732 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:51:53.185749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:51:53.185764 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:51:53.185780 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:51:53.185797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:51:53.185812 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:51:53.185829 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:51:53.185849 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:51:53.185867 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:51:53.185883 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:51:53.185901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:51:53.185917 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:51:53.185934 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:51:53.185950 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:51:53.186007 systemd-journald[181]: Collecting audit messages is disabled.
Nov 12 20:51:53.186052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:51:53.186071 systemd-journald[181]: Journal started
Nov 12 20:51:53.186144 systemd-journald[181]: Runtime Journal (/run/log/journal/ee4d180c2f1548109c801bf1115d6e29) is 4.9M, max 39.3M, 34.4M free.
Nov 12 20:51:53.191222 systemd-modules-load[182]: Inserted module 'overlay'
Nov 12 20:51:53.195690 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:51:53.234026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:51:53.276001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:51:53.276077 kernel: Bridge firewalling registered
Nov 12 20:51:53.245845 systemd-modules-load[182]: Inserted module 'br_netfilter'
Nov 12 20:51:53.277124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:51:53.294473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:51:53.299727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:51:53.321385 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:51:53.325877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:51:53.329025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:51:53.335213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:51:53.358449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:51:53.365502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:51:53.368956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:51:53.387942 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:51:53.396934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:51:53.419717 dracut-cmdline[217]: dracut-dracut-053
Nov 12 20:51:53.424721 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:51:53.455599 systemd-resolved[220]: Positive Trust Anchors:
Nov 12 20:51:53.455640 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:51:53.455699 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:51:53.459940 systemd-resolved[220]: Defaulting to hostname 'linux'.
Nov 12 20:51:53.462667 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:51:53.466831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:51:53.595304 kernel: SCSI subsystem initialized
Nov 12 20:51:53.615329 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:51:53.652657 kernel: iscsi: registered transport (tcp)
Nov 12 20:51:53.692412 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:51:53.692564 kernel: QLogic iSCSI HBA Driver
Nov 12 20:51:53.803543 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:51:53.816036 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:51:53.872287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:51:53.872420 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:51:53.872829 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:51:53.965345 kernel: raid6: avx2x4 gen() 17137 MB/s
Nov 12 20:51:53.984775 kernel: raid6: avx2x2 gen() 14951 MB/s
Nov 12 20:51:54.002015 kernel: raid6: avx2x1 gen() 8834 MB/s
Nov 12 20:51:54.002144 kernel: raid6: using algorithm avx2x4 gen() 17137 MB/s
Nov 12 20:51:54.025262 kernel: raid6: .... xor() 4817 MB/s, rmw enabled
Nov 12 20:51:54.025412 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:51:54.065441 kernel: xor: automatically using best checksumming function avx
Nov 12 20:51:54.347681 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:51:54.384721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:51:54.395155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:51:54.439596 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Nov 12 20:51:54.447592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:51:54.463991 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:51:54.517723 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Nov 12 20:51:54.586079 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:51:54.601961 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:51:54.687475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:51:54.697990 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:51:54.743462 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:51:54.746519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:51:54.748537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:51:54.751019 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:51:54.758928 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:51:54.807077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:51:54.827669 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 12 20:51:54.923024 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:51:54.923066 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 12 20:51:54.923286 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:51:54.923305 kernel: GPT:9289727 != 125829119
Nov 12 20:51:54.923322 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:51:54.923336 kernel: GPT:9289727 != 125829119
Nov 12 20:51:54.923352 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:51:54.923368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:51:54.923392 kernel: ACPI: bus type USB registered
Nov 12 20:51:54.923407 kernel: usbcore: registered new interface driver usbfs
Nov 12 20:51:54.923422 kernel: usbcore: registered new interface driver hub
Nov 12 20:51:54.923437 kernel: usbcore: registered new device driver usb
Nov 12 20:51:54.923451 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 12 20:51:54.980790 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:51:54.981117 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Nov 12 20:51:54.981345 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:51:54.964291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:51:54.986337 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:51:54.964507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:51:54.965717 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:51:54.966591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:51:54.966904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:51:54.970425 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:51:54.986818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:51:55.024377 kernel: libata version 3.00 loaded.
Nov 12 20:51:55.053123 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 12 20:51:55.055939 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 12 20:51:55.056139 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 12 20:51:55.056311 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 12 20:51:55.056517 kernel: hub 1-0:1.0: USB hub found
Nov 12 20:51:55.056753 kernel: hub 1-0:1.0: 2 ports detected
Nov 12 20:51:55.171912 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446)
Nov 12 20:51:55.187771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:51:55.211220 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (451)
Nov 12 20:51:55.211267 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 12 20:51:55.232282 kernel: scsi host1: ata_piix
Nov 12 20:51:55.232552 kernel: scsi host2: ata_piix
Nov 12 20:51:55.233110 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 12 20:51:55.233135 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 12 20:51:55.217355 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:51:55.227240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:51:55.253515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:51:55.260689 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:51:55.262820 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:51:55.274045 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:51:55.287173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:51:55.301610 disk-uuid[541]: Primary Header is updated.
Nov 12 20:51:55.301610 disk-uuid[541]: Secondary Entries is updated.
Nov 12 20:51:55.301610 disk-uuid[541]: Secondary Header is updated.
Nov 12 20:51:55.311675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:51:55.322682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:51:55.328108 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:51:55.337697 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:51:56.338720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:51:56.340415 disk-uuid[544]: The operation has completed successfully.
Nov 12 20:51:56.435731 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:51:56.436145 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:51:56.446296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:51:56.454943 sh[564]: Success
Nov 12 20:51:56.476936 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:51:56.604364 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:51:56.613412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:51:56.637975 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:51:56.666851 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:51:56.666951 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:51:56.687414 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:51:56.687631 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:51:56.687661 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:51:56.712253 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:51:56.715457 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:51:56.730920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:51:56.738986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:51:56.786958 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:51:56.787086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:51:56.789380 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:51:56.793731 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:51:56.819930 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:51:56.822345 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:51:56.834664 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:51:56.846063 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:51:57.018353 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:51:57.029057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:51:57.066543 ignition[662]: Ignition 2.19.0
Nov 12 20:51:57.067658 ignition[662]: Stage: fetch-offline
Nov 12 20:51:57.067745 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:51:57.067761 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:51:57.070761 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:51:57.067972 ignition[662]: parsed url from cmdline: ""
Nov 12 20:51:57.067978 ignition[662]: no config URL provided
Nov 12 20:51:57.067988 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:51:57.068008 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:51:57.068017 ignition[662]: failed to fetch config: resource requires networking
Nov 12 20:51:57.069191 ignition[662]: Ignition finished successfully
Nov 12 20:51:57.082034 systemd-networkd[752]: lo: Link UP
Nov 12 20:51:57.082201 systemd-networkd[752]: lo: Gained carrier
Nov 12 20:51:57.086958 systemd-networkd[752]: Enumeration completed
Nov 12 20:51:57.087167 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:51:57.088415 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:51:57.088420 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 12 20:51:57.089321 systemd[1]: Reached target network.target - Network.
Nov 12 20:51:57.090613 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:51:57.090661 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:51:57.092893 systemd-networkd[752]: eth0: Link UP
Nov 12 20:51:57.092900 systemd-networkd[752]: eth0: Gained carrier
Nov 12 20:51:57.092920 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:51:57.098129 systemd-networkd[752]: eth1: Link UP
Nov 12 20:51:57.098135 systemd-networkd[752]: eth1: Gained carrier
Nov 12 20:51:57.098155 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:51:57.105256 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:51:57.114793 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.10/20 acquired from 169.254.169.253
Nov 12 20:51:57.118302 systemd-networkd[752]: eth0: DHCPv4 address 137.184.81.153/20, gateway 137.184.80.1 acquired from 169.254.169.253
Nov 12 20:51:57.148031 ignition[755]: Ignition 2.19.0
Nov 12 20:51:57.149871 ignition[755]: Stage: fetch
Nov 12 20:51:57.151007 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:51:57.151033 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:51:57.151339 ignition[755]: parsed url from cmdline: ""
Nov 12 20:51:57.151346 ignition[755]: no config URL provided
Nov 12 20:51:57.151356 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:51:57.151373 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:51:57.151409 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 12 20:51:57.182569 ignition[755]: GET result: OK
Nov 12 20:51:57.183712 ignition[755]: parsing config with SHA512: 215f56cf78df0d7193c29b43188a18cafced58c34a2b12a6b29643d8bfefc31525b9cc27650ea41309b12454f328d09befa7f1dd31e5acffcff088ee78736366
Nov 12 20:51:57.200144 unknown[755]: fetched base config from "system"
Nov 12 20:51:57.200163 unknown[755]: fetched base config from "system"
Nov 12 20:51:57.200926 ignition[755]: fetch: fetch complete
Nov 12 20:51:57.200171 unknown[755]: fetched user config from "digitalocean"
Nov 12 20:51:57.200936 ignition[755]: fetch: fetch passed
Nov 12 20:51:57.201024 ignition[755]: Ignition finished successfully
Nov 12 20:51:57.206130 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:51:57.217327 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:51:57.261914 ignition[762]: Ignition 2.19.0
Nov 12 20:51:57.261971 ignition[762]: Stage: kargs
Nov 12 20:51:57.262492 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:51:57.262508 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:51:57.266253 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:51:57.264101 ignition[762]: kargs: kargs passed
Nov 12 20:51:57.264188 ignition[762]: Ignition finished successfully
Nov 12 20:51:57.279049 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:51:57.305446 ignition[768]: Ignition 2.19.0
Nov 12 20:51:57.305464 ignition[768]: Stage: disks
Nov 12 20:51:57.305955 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:51:57.305977 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:51:57.320989 ignition[768]: disks: disks passed
Nov 12 20:51:57.323392 ignition[768]: Ignition finished successfully
Nov 12 20:51:57.325081 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:51:57.327756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:51:57.329922 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:51:57.330829 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:51:57.333022 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:51:57.333587 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:51:57.347009 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:51:57.369992 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:51:57.374969 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:51:57.391921 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:51:57.556676 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:51:57.557845 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:51:57.559438 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:51:57.570022 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:51:57.575979 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:51:57.578876 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 12 20:51:57.592986 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:51:57.606220 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Nov 12 20:51:57.606269 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:51:57.606290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:51:57.606311 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:51:57.595183 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:51:57.595231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:51:57.619736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:51:57.625702 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:51:57.627977 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:51:57.655535 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:51:57.745289 coreos-metadata[787]: Nov 12 20:51:57.745 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:51:57.752398 coreos-metadata[786]: Nov 12 20:51:57.751 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:51:57.762128 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:51:57.764007 coreos-metadata[787]: Nov 12 20:51:57.762 INFO Fetch successful
Nov 12 20:51:57.769970 coreos-metadata[786]: Nov 12 20:51:57.769 INFO Fetch successful
Nov 12 20:51:57.776749 coreos-metadata[787]: Nov 12 20:51:57.775 INFO wrote hostname ci-4081.2.0-a-ee124ee133 to /sysroot/etc/hostname
Nov 12 20:51:57.779608 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:51:57.784521 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:51:57.789600 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 12 20:51:57.789854 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Nov 12 20:51:57.796130 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:51:57.802411 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:51:58.013657 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:51:58.023377 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:51:58.031080 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:51:58.049016 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:51:58.051020 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:51:58.094613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:51:58.129933 ignition[907]: INFO : Ignition 2.19.0
Nov 12 20:51:58.129933 ignition[907]: INFO : Stage: mount
Nov 12 20:51:58.129933 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:51:58.129933 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:51:58.134057 ignition[907]: INFO : mount: mount passed
Nov 12 20:51:58.134057 ignition[907]: INFO : Ignition finished successfully
Nov 12 20:51:58.134162 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:51:58.143888 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:51:58.173540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:51:58.185673 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Nov 12 20:51:58.191518 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:51:58.191657 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:51:58.191678 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:51:58.196705 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:51:58.200295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:51:58.238876 ignition[935]: INFO : Ignition 2.19.0 Nov 12 20:51:58.238876 ignition[935]: INFO : Stage: files Nov 12 20:51:58.240931 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:51:58.240931 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 12 20:51:58.240931 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:51:58.245904 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:51:58.245904 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:51:58.247988 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:51:58.247988 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:51:58.250122 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:51:58.248429 unknown[935]: wrote ssh authorized keys file for user: core Nov 12 20:51:58.252747 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:51:58.252747 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:51:58.315047 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:51:58.446105 systemd-networkd[752]: eth0: Gained IPv6LL Nov 12 20:51:58.469280 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:51:58.469280 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 20:51:58.472107 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 12 20:51:58.949036 systemd-networkd[752]: eth1: Gained IPv6LL Nov 12 20:51:58.952768 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 20:51:59.091449 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 12 20:51:59.102420 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:51:59.105522 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:51:59.105522 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:51:59.107962 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:51:59.107962 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:51:59.490777 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 20:52:00.002666 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:52:00.002666 ignition[935]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:52:00.007509 ignition[935]: INFO : files: createResultFile: createFiles: op(f): 
[started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:52:00.007509 ignition[935]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:52:00.007509 ignition[935]: INFO : files: files passed
Nov 12 20:52:00.007509 ignition[935]: INFO : Ignition finished successfully
Nov 12 20:52:00.008212 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:52:00.020687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:52:00.041045 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:52:00.053086 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:52:00.053918 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:52:00.066837 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:52:00.066837 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:52:00.069940 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:52:00.074805 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:52:00.076425 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:52:00.086319 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:52:00.163068 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:52:00.163245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:52:00.167138 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:52:00.169288 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:52:00.172133 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:52:00.182165 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:52:00.223760 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:52:00.234264 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:52:00.254352 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:52:00.256294 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:52:00.258055 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:52:00.258649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:52:00.258827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:52:00.262675 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:52:00.282870 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:52:00.283775 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:52:00.286806 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:52:00.288797 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:52:00.290832 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:52:00.291866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:52:00.295413 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:52:00.298070 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:52:00.299182 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:52:00.299831 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:52:00.300042 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:52:00.303331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:52:00.304475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:52:00.308040 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:52:00.313874 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:52:00.328102 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:52:00.328532 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:52:00.331053 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:52:00.331256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:52:00.335072 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:52:00.335272 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:52:00.336244 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:52:00.336410 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:52:00.360300 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:52:00.366553 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:52:00.368737 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:52:00.369010 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:52:00.374324 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:52:00.374535 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:52:00.385566 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:52:00.390890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:52:00.403688 ignition[988]: INFO : Ignition 2.19.0
Nov 12 20:52:00.403688 ignition[988]: INFO : Stage: umount
Nov 12 20:52:00.403688 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:52:00.403688 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:52:00.424894 ignition[988]: INFO : umount: umount passed
Nov 12 20:52:00.424894 ignition[988]: INFO : Ignition finished successfully
Nov 12 20:52:00.424144 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:52:00.424315 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:52:00.428878 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:52:00.428957 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:52:00.437295 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:52:00.438415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:52:00.439643 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:52:00.439741 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:52:00.440608 systemd[1]: Stopped target network.target - Network.
Nov 12 20:52:00.468129 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:52:00.468265 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:52:00.469592 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:52:00.471554 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:52:00.472637 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:52:00.474205 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:52:00.475973 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:52:00.478138 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:52:00.478223 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:52:00.538144 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:52:00.538235 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:52:00.539282 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:52:00.539384 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:52:00.540074 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:52:00.540133 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:52:00.541326 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:52:00.544046 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:52:00.545716 systemd-networkd[752]: eth0: DHCPv6 lease lost
Nov 12 20:52:00.547508 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:52:00.550913 systemd-networkd[752]: eth1: DHCPv6 lease lost
Nov 12 20:52:00.553049 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:52:00.553278 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:52:00.607973 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:52:00.608246 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:52:00.611585 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:52:00.611872 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:52:00.615612 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:52:00.615769 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:52:00.631371 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:52:00.631491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:52:00.650017 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:52:00.650747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:52:00.650871 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:52:00.652373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:52:00.652459 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:52:00.653280 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:52:00.653348 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:52:00.654104 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:52:00.654168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:52:00.655098 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:52:00.690311 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:52:00.690486 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:52:00.701768 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:52:00.702061 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:52:00.703956 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:52:00.704076 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:52:00.704954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:52:00.705034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:52:00.714537 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:52:00.714772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:52:00.715745 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:52:00.715830 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:52:00.723101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:52:00.723202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:52:00.748127 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:52:00.749637 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:52:00.749741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:52:00.750669 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:52:00.750743 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:52:00.751638 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:52:00.751711 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:52:00.752517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:52:00.752589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:00.794071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:52:00.794637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:52:00.796741 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:52:00.807306 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:52:00.824612 systemd[1]: Switching root.
Nov 12 20:52:00.864205 systemd-journald[181]: Journal stopped
Nov 12 20:52:03.352531 systemd-journald[181]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:52:03.352713 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:52:03.352750 kernel: SELinux: policy capability open_perms=1
Nov 12 20:52:03.352767 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:52:03.352784 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:52:03.352800 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:52:03.352818 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:52:03.352836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:52:03.352853 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:52:03.352879 kernel: audit: type=1403 audit(1731444721.225:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:52:03.352906 systemd[1]: Successfully loaded SELinux policy in 63.311ms.
Nov 12 20:52:03.352936 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.206ms.
Nov 12 20:52:03.352957 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:52:03.352977 systemd[1]: Detected virtualization kvm.
Nov 12 20:52:03.352997 systemd[1]: Detected architecture x86-64.
Nov 12 20:52:03.353017 systemd[1]: Detected first boot.
Nov 12 20:52:03.353036 systemd[1]: Hostname set to .
Nov 12 20:52:03.353086 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:52:03.353345 zram_generator::config[1031]: No configuration found.
Nov 12 20:52:03.353374 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:52:03.353392 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:52:03.353413 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:52:03.353433 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:52:03.353455 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:52:03.353473 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:52:03.353491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:52:03.353514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:52:03.353534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:52:03.353554 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:52:03.353574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:52:03.353602 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:52:03.356714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:52:03.356776 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:52:03.356797 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:52:03.356815 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:52:03.356847 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:52:03.356867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:52:03.356885 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:52:03.356903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:52:03.356921 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:52:03.356940 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:52:03.356966 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:52:03.356986 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:52:03.357006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:52:03.357024 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:52:03.357042 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:52:03.357061 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:52:03.357080 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:52:03.357221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:52:03.357249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:52:03.357278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:52:03.357299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:52:03.357319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:52:03.357339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:52:03.357358 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:52:03.357375 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:52:03.357394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:03.357411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:52:03.357429 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:52:03.357454 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:52:03.357477 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:52:03.357497 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:52:03.357517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:52:03.357539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:52:03.357560 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:52:03.357579 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:52:03.357597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:52:03.357645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:52:03.357664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:52:03.357684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:52:03.357703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:52:03.357724 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:52:03.357745 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:52:03.357765 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:52:03.357783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:52:03.357803 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:52:03.357860 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:52:03.357882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:52:03.357902 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:52:03.357921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:52:03.357939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:52:03.357956 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:52:03.357975 systemd[1]: Stopped verity-setup.service.
Nov 12 20:52:03.357995 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:03.358012 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:52:03.358035 kernel: ACPI: bus type drm_connector registered
Nov 12 20:52:03.358055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:52:03.358076 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:52:03.358208 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:52:03.358247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:52:03.358267 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:52:03.358285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:52:03.358308 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:52:03.358328 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:52:03.358346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:52:03.358370 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:52:03.358390 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:52:03.358428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:52:03.358447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:52:03.358464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:52:03.358482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:52:03.358502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:52:03.358523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:52:03.358546 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:52:03.363670 systemd-journald[1104]: Collecting audit messages is disabled.
Nov 12 20:52:03.363779 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:52:03.363808 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:52:03.363830 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:52:03.364211 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:52:03.364286 kernel: loop: module loaded
Nov 12 20:52:03.364315 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:52:03.364354 systemd-journald[1104]: Journal started
Nov 12 20:52:03.364398 systemd-journald[1104]: Runtime Journal (/run/log/journal/ee4d180c2f1548109c801bf1115d6e29) is 4.9M, max 39.3M, 34.4M free.
Nov 12 20:52:02.530905 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:52:02.585993 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:52:02.586573 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:52:03.384889 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:52:03.385020 kernel: fuse: init (API version 7.39)
Nov 12 20:52:03.385059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:52:03.399246 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:52:03.402766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:52:03.419759 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:52:03.438661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:52:03.445681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:52:03.458726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:52:03.466670 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:52:03.469962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:52:03.472875 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:52:03.473910 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:52:03.489704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:52:03.489968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:52:03.491240 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:52:03.510937 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:52:03.531952 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:52:03.614168 kernel: loop0: detected capacity change from 0 to 140768
Nov 12 20:52:03.612737 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:52:03.628893 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:52:03.646107 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:52:03.676984 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:52:03.680878 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:52:03.687769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:52:03.689033 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:52:03.733163 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:52:03.768057 systemd-journald[1104]: Time spent on flushing to /var/log/journal/ee4d180c2f1548109c801bf1115d6e29 is 56.911ms for 999 entries.
Nov 12 20:52:03.768057 systemd-journald[1104]: System Journal (/var/log/journal/ee4d180c2f1548109c801bf1115d6e29) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:52:03.858195 systemd-journald[1104]: Received client request to flush runtime journal.
Nov 12 20:52:03.858283 kernel: loop1: detected capacity change from 0 to 8
Nov 12 20:52:03.858321 kernel: loop2: detected capacity change from 0 to 211296
Nov 12 20:52:03.854260 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Nov 12 20:52:03.854282 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Nov 12 20:52:03.868073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:52:03.873010 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:52:03.877399 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:52:03.907318 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:52:03.920921 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:52:03.923067 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:52:03.934671 kernel: loop3: detected capacity change from 0 to 142488
Nov 12 20:52:03.928837 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:52:04.000459 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 12 20:52:04.047030 kernel: loop4: detected capacity change from 0 to 140768
Nov 12 20:52:04.121672 kernel: loop5: detected capacity change from 0 to 8
Nov 12 20:52:04.121862 kernel: loop6: detected capacity change from 0 to 211296
Nov 12 20:52:04.181728 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:52:04.224480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:52:04.243197 kernel: loop7: detected capacity change from 0 to 142488
Nov 12 20:52:04.280811 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 12 20:52:04.282363 (sd-merge)[1175]: Merged extensions into '/usr'.
Nov 12 20:52:04.329072 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:52:04.329444 systemd[1]: Reloading...
Nov 12 20:52:04.332986 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 12 20:52:04.333021 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 12 20:52:04.650145 zram_generator::config[1205]: No configuration found.
Nov 12 20:52:05.179975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:52:05.239656 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:52:05.312827 systemd[1]: Reloading finished in 982 ms.
Nov 12 20:52:05.363873 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:52:05.367770 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:52:05.369527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:52:05.420960 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:52:05.426300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:52:05.445842 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:52:05.445869 systemd[1]: Reloading...
Nov 12 20:52:05.556579 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:52:05.557771 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:52:05.561978 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:52:05.562442 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Nov 12 20:52:05.562544 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Nov 12 20:52:05.570703 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:52:05.571701 systemd-tmpfiles[1250]: Skipping /boot
Nov 12 20:52:05.618299 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:52:05.619890 systemd-tmpfiles[1250]: Skipping /boot
Nov 12 20:52:05.750768 zram_generator::config[1279]: No configuration found.
Nov 12 20:52:06.072778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:52:06.189520 systemd[1]: Reloading finished in 743 ms.
Nov 12 20:52:06.231584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:52:06.256447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:52:06.286794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:52:06.304412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:52:06.317884 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:52:06.334602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:52:06.342101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:52:06.355259 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:52:06.362520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.364234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:52:06.380848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:52:06.386171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:52:06.398230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:52:06.399922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:52:06.400139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.413374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:52:06.413700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:52:06.431750 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:52:06.446895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.448378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:52:06.465974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:52:06.468194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:52:06.496599 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:52:06.497410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.500892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:52:06.501659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:52:06.504531 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:52:06.505168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:52:06.519276 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Nov 12 20:52:06.522478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:52:06.523292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:52:06.527590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.529454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:52:06.545999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:52:06.557997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:52:06.566944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:52:06.568532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:52:06.568764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.569760 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:52:06.571684 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:52:06.595903 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:52:06.600793 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:52:06.606345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:52:06.606639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:52:06.608901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:52:06.637273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:52:06.638790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:52:06.645531 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:52:06.647749 augenrules[1360]: No rules
Nov 12 20:52:06.646681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:52:06.650474 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:52:06.651844 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:52:06.663810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:52:06.663919 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:52:06.678044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:52:06.681633 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:52:06.712148 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:52:06.739336 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:52:06.950998 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 12 20:52:06.953400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:06.953754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:52:06.971998 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1374)
Nov 12 20:52:06.964362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:52:06.995667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383)
Nov 12 20:52:07.024231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:52:07.035429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:52:07.036610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:52:07.036696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:52:07.036722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:52:07.057840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:52:07.058763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:52:07.069331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:52:07.069633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:52:07.072038 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:52:07.072498 systemd-networkd[1375]: lo: Link UP
Nov 12 20:52:07.072907 systemd-networkd[1375]: lo: Gained carrier
Nov 12 20:52:07.074468 systemd-networkd[1375]: Enumeration completed
Nov 12 20:52:07.074838 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:52:07.092899 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:52:07.120964 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 12 20:52:07.125999 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 12 20:52:07.141169 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1374)
Nov 12 20:52:07.139239 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:52:07.139552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:52:07.145709 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:52:07.145987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:52:07.191796 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-56:22:12:3e:3a:50.network.
Nov 12 20:52:07.193018 systemd-networkd[1375]: eth0: Link UP
Nov 12 20:52:07.193218 systemd-networkd[1375]: eth0: Gained carrier
Nov 12 20:52:07.217805 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:52:07.219440 systemd-resolved[1329]: Positive Trust Anchors:
Nov 12 20:52:07.219463 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:52:07.219511 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:52:07.219965 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:52:07.233506 systemd-resolved[1329]: Using system hostname 'ci-4081.2.0-a-ee124ee133'.
Nov 12 20:52:07.238116 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:52:07.241333 systemd[1]: Reached target network.target - Network.
Nov 12 20:52:07.252788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:52:07.274382 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-ce:ce:ea:d6:b6:48.network.
Nov 12 20:52:07.277522 systemd-networkd[1375]: eth1: Link UP
Nov 12 20:52:07.278182 systemd-networkd[1375]: eth1: Gained carrier
Nov 12 20:52:07.333672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:52:07.335070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:52:07.355652 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 12 20:52:07.367363 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:52:07.356024 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:52:07.395477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:52:07.424101 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:52:07.485700 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 12 20:52:07.493690 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 12 20:52:07.497420 systemd-timesyncd[1356]: Contacted time server 12.203.31.102:123 (0.flatcar.pool.ntp.org).
Nov 12 20:52:07.497527 systemd-timesyncd[1356]: Initial clock synchronization to Tue 2024-11-12 20:52:07.443105 UTC.
Nov 12 20:52:07.502662 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:52:07.531042 kernel: Console: switching to colour dummy device 80x25
Nov 12 20:52:07.532095 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 12 20:52:07.532203 kernel: [drm] features: -context_init
Nov 12 20:52:07.536732 kernel: [drm] number of scanouts: 1
Nov 12 20:52:07.536842 kernel: [drm] number of cap sets: 0
Nov 12 20:52:07.537857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:07.552657 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 12 20:52:07.561225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:52:07.565851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:07.597425 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 12 20:52:07.597554 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:52:07.597386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:07.624645 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 12 20:52:07.681703 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:52:07.683957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:07.706105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:52:07.777503 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:52:07.818905 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:52:07.839280 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:52:07.899941 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:52:07.902611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:52:07.943174 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:52:07.949591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:52:07.951908 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:52:07.952255 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:52:07.952460 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:52:07.952941 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:52:07.956270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:52:07.962769 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:52:07.969508 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:52:07.969655 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:52:07.971918 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:52:07.973492 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:52:07.981397 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:52:07.998043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:52:08.011299 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:52:08.019280 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:52:08.024835 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:52:08.026668 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:52:08.028016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:52:08.028060 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:52:08.044933 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:52:08.054043 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:52:08.082113 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:52:08.100036 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:52:08.109440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:52:08.140521 jq[1446]: false
Nov 12 20:52:08.158277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:52:08.159748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:52:08.178066 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:52:08.196910 coreos-metadata[1442]: Nov 12 20:52:08.196 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:52:08.215373 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:52:08.227469 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:52:08.242483 coreos-metadata[1442]: Nov 12 20:52:08.233 INFO Fetch successful
Nov 12 20:52:08.238027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:52:08.250184 systemd-networkd[1375]: eth0: Gained IPv6LL
Nov 12 20:52:08.287758 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:52:08.289457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:52:08.294502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:52:08.304982 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:52:08.317032 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found loop4
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found loop5
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found loop6
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found loop7
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda1
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda2
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda3
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found usr
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda4
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda6
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda7
Nov 12 20:52:08.351544 extend-filesystems[1447]: Found vda9
Nov 12 20:52:08.351544 extend-filesystems[1447]: Checking size of /dev/vda9
Nov 12 20:52:08.325827 dbus-daemon[1443]: [system] SELinux support is enabled
Nov 12 20:52:08.322404 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:52:08.328935 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:52:08.395266 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:52:08.457273 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:52:08.458764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:52:08.461531 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:52:08.480262 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:52:08.483727 jq[1457]: true
Nov 12 20:52:08.500840 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:52:08.501245 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:52:08.515662 update_engine[1455]: I20241112 20:52:08.458496 1455 main.cc:92] Flatcar Update Engine starting
Nov 12 20:52:08.526078 update_engine[1455]: I20241112 20:52:08.525111 1455 update_check_scheduler.cc:74] Next update check in 2m18s
Nov 12 20:52:08.526199 extend-filesystems[1447]: Resized partition /dev/vda9
Nov 12 20:52:08.553822 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:52:08.605188 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 12 20:52:08.575523 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:52:08.629950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:52:08.652501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:52:08.663881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1379)
Nov 12 20:52:08.653408 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:52:08.653466 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:52:08.668338 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:52:08.672672 jq[1472]: true
Nov 12 20:52:08.668468 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 12 20:52:08.668507 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:52:08.697172 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:52:08.724169 tar[1471]: linux-amd64/helm
Nov 12 20:52:08.732957 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:52:08.839362 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:52:08.871531 systemd-networkd[1375]: eth1: Gained IPv6LL
Nov 12 20:52:08.946438 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:52:08.972166 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:52:09.013209 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 12 20:52:09.018389 systemd-logind[1453]: New seat seat0.
Nov 12 20:52:09.111487 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:52:09.111516 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:52:09.111981 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:52:09.181764 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:52:09.181764 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 12 20:52:09.181764 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 12 20:52:09.212821 extend-filesystems[1447]: Resized filesystem in /dev/vda9
Nov 12 20:52:09.212821 extend-filesystems[1447]: Found vdb
Nov 12 20:52:09.198922 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:52:09.199224 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:52:09.304877 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:52:09.343575 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:52:09.344372 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:52:09.390364 systemd[1]: Starting sshkeys.service...
Nov 12 20:52:09.447424 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 20:52:09.463025 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 20:52:09.616811 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:52:09.658577 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:52:09.716349 coreos-metadata[1527]: Nov 12 20:52:09.711 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:52:09.743094 coreos-metadata[1527]: Nov 12 20:52:09.742 INFO Fetch successful
Nov 12 20:52:09.785304 unknown[1527]: wrote ssh authorized keys file for user: core
Nov 12 20:52:09.874125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:52:09.896789 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:52:09.928327 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:52:09.936445 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 12 20:52:09.945911 systemd[1]: Finished sshkeys.service.
Nov 12 20:52:09.957017 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:52:09.958748 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:52:09.998040 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:52:10.079479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:52:10.136424 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:52:10.158521 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:52:10.167441 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:52:10.174671 containerd[1483]: time="2024-11-12T20:52:10.173763157Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:52:10.350763 containerd[1483]: time="2024-11-12T20:52:10.349042143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.366091 containerd[1483]: time="2024-11-12T20:52:10.365975233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:52:10.366091 containerd[1483]: time="2024-11-12T20:52:10.366067626Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:52:10.366480 containerd[1483]: time="2024-11-12T20:52:10.366127670Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:52:10.366480 containerd[1483]: time="2024-11-12T20:52:10.366432120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:52:10.366559 containerd[1483]: time="2024-11-12T20:52:10.366486268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.366672 containerd[1483]: time="2024-11-12T20:52:10.366608742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:52:10.366672 containerd[1483]: time="2024-11-12T20:52:10.366674057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.367238 containerd[1483]: time="2024-11-12T20:52:10.367160949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:52:10.367432 containerd[1483]: time="2024-11-12T20:52:10.367240743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.367432 containerd[1483]: time="2024-11-12T20:52:10.367266287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:52:10.367432 containerd[1483]: time="2024-11-12T20:52:10.367285401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.367941 containerd[1483]: time="2024-11-12T20:52:10.367440328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.372808 containerd[1483]: time="2024-11-12T20:52:10.372732063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:52:10.377667 containerd[1483]: time="2024-11-12T20:52:10.377181839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:52:10.377667 containerd[1483]: time="2024-11-12T20:52:10.377263022Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:52:10.377667 containerd[1483]: time="2024-11-12T20:52:10.377565803Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:52:10.380309 containerd[1483]: time="2024-11-12T20:52:10.379859355Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:52:10.407774 containerd[1483]: time="2024-11-12T20:52:10.405604705Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:52:10.407774 containerd[1483]: time="2024-11-12T20:52:10.407568011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:52:10.407774 containerd[1483]: time="2024-11-12T20:52:10.407692581Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:52:10.407774 containerd[1483]: time="2024-11-12T20:52:10.407762220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:52:10.407774 containerd[1483]: time="2024-11-12T20:52:10.407829690Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:52:10.408747 containerd[1483]: time="2024-11-12T20:52:10.408211872Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412093360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412532639Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412574222Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412601818Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412692734Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412733468Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412754836Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412781754Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412808930Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412832893Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412856794Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412880968Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412920676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413082 containerd[1483]: time="2024-11-12T20:52:10.412941511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413129344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413162688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413186797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413211093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413235129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413265868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413289552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413316967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413337976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413366516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413389861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413423916Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413497227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413542426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.413776 containerd[1483]: time="2024-11-12T20:52:10.413562347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.415852226Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.415985238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416014746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416040055Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416057900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416083542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416104383Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:52:10.417660 containerd[1483]: time="2024-11-12T20:52:10.416131751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:52:10.418171 containerd[1483]: time="2024-11-12T20:52:10.416552384Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:52:10.418171 containerd[1483]: time="2024-11-12T20:52:10.416689812Z" level=info msg="Connect containerd service" Nov 12 20:52:10.418171 containerd[1483]: time="2024-11-12T20:52:10.416776287Z" level=info msg="using legacy CRI server" Nov 12 20:52:10.418171 containerd[1483]: time="2024-11-12T20:52:10.416791531Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:52:10.418171 containerd[1483]: 
time="2024-11-12T20:52:10.416970839Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.423288369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424127372Z" level=info msg="Start subscribing containerd event" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424294308Z" level=info msg="Start recovering state" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424449467Z" level=info msg="Start event monitor" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424481073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424480327Z" level=info msg="Start snapshots syncer" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424583433Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:52:10.425760 containerd[1483]: time="2024-11-12T20:52:10.424597930Z" level=info msg="Start streaming server" Nov 12 20:52:10.431543 containerd[1483]: time="2024-11-12T20:52:10.428229429Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:52:10.431543 containerd[1483]: time="2024-11-12T20:52:10.428436112Z" level=info msg="containerd successfully booted in 0.256753s" Nov 12 20:52:10.428753 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:52:11.018513 tar[1471]: linux-amd64/LICENSE Nov 12 20:52:11.020991 tar[1471]: linux-amd64/README.md Nov 12 20:52:11.060136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 20:52:11.739573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:11.744936 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:52:11.750897 systemd[1]: Startup finished in 1.733s (kernel) + 8.428s (initrd) + 10.585s (userspace) = 20.747s. Nov 12 20:52:11.758189 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:52:13.036727 kubelet[1567]: E1112 20:52:13.036472 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:52:13.039951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:52:13.040203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:52:13.040683 systemd[1]: kubelet.service: Consumed 1.691s CPU time. Nov 12 20:52:17.814529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:52:17.832199 systemd[1]: Started sshd@0-137.184.81.153:22-139.178.68.195:56506.service - OpenSSH per-connection server daemon (139.178.68.195:56506). Nov 12 20:52:17.932126 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 56506 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:17.936316 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:17.961791 systemd-logind[1453]: New session 1 of user core. Nov 12 20:52:17.962949 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:52:17.970245 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 12 20:52:18.029555 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:52:18.056207 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:52:18.084197 (systemd)[1584]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:52:18.269073 systemd[1584]: Queued start job for default target default.target. Nov 12 20:52:18.282489 systemd[1584]: Created slice app.slice - User Application Slice. Nov 12 20:52:18.282787 systemd[1584]: Reached target paths.target - Paths. Nov 12 20:52:18.282906 systemd[1584]: Reached target timers.target - Timers. Nov 12 20:52:18.289912 systemd[1584]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:52:18.318141 systemd[1584]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:52:18.318345 systemd[1584]: Reached target sockets.target - Sockets. Nov 12 20:52:18.318369 systemd[1584]: Reached target basic.target - Basic System. Nov 12 20:52:18.318441 systemd[1584]: Reached target default.target - Main User Target. Nov 12 20:52:18.318487 systemd[1584]: Startup finished in 210ms. Nov 12 20:52:18.318648 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:52:18.333428 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:52:18.427707 systemd[1]: Started sshd@1-137.184.81.153:22-139.178.68.195:56518.service - OpenSSH per-connection server daemon (139.178.68.195:56518). Nov 12 20:52:18.535110 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 56518 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:18.538381 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:18.550022 systemd-logind[1453]: New session 2 of user core. Nov 12 20:52:18.558647 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 12 20:52:18.641761 sshd[1595]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:18.661327 systemd[1]: sshd@1-137.184.81.153:22-139.178.68.195:56518.service: Deactivated successfully. Nov 12 20:52:18.669647 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:52:18.674800 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:52:18.684425 systemd[1]: Started sshd@2-137.184.81.153:22-139.178.68.195:56528.service - OpenSSH per-connection server daemon (139.178.68.195:56528). Nov 12 20:52:18.687067 systemd-logind[1453]: Removed session 2. Nov 12 20:52:18.780699 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 56528 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:18.774443 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:18.788031 systemd-logind[1453]: New session 3 of user core. Nov 12 20:52:18.797875 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:52:18.874979 sshd[1602]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:18.886433 systemd[1]: sshd@2-137.184.81.153:22-139.178.68.195:56528.service: Deactivated successfully. Nov 12 20:52:18.891841 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:52:18.895342 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:52:18.902234 systemd[1]: Started sshd@3-137.184.81.153:22-139.178.68.195:56534.service - OpenSSH per-connection server daemon (139.178.68.195:56534). Nov 12 20:52:18.907128 systemd-logind[1453]: Removed session 3. Nov 12 20:52:18.979087 sshd[1609]: Accepted publickey for core from 139.178.68.195 port 56534 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:18.975609 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:18.986475 systemd-logind[1453]: New session 4 of user core. 
Nov 12 20:52:18.997822 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:52:19.074392 sshd[1609]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:19.090164 systemd[1]: sshd@3-137.184.81.153:22-139.178.68.195:56534.service: Deactivated successfully. Nov 12 20:52:19.104111 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:52:19.114669 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:52:19.137472 systemd[1]: Started sshd@4-137.184.81.153:22-139.178.68.195:56540.service - OpenSSH per-connection server daemon (139.178.68.195:56540). Nov 12 20:52:19.140526 systemd-logind[1453]: Removed session 4. Nov 12 20:52:19.206122 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 56540 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:19.207997 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:19.215915 systemd-logind[1453]: New session 5 of user core. Nov 12 20:52:19.229000 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:52:19.329185 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:52:19.331954 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:52:19.349813 sudo[1619]: pam_unix(sudo:session): session closed for user root Nov 12 20:52:19.360091 sshd[1616]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:19.372906 systemd[1]: sshd@4-137.184.81.153:22-139.178.68.195:56540.service: Deactivated successfully. Nov 12 20:52:19.377917 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:52:19.380204 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:52:19.391242 systemd[1]: Started sshd@5-137.184.81.153:22-139.178.68.195:56554.service - OpenSSH per-connection server daemon (139.178.68.195:56554). 
Nov 12 20:52:19.394228 systemd-logind[1453]: Removed session 5. Nov 12 20:52:19.463811 sshd[1624]: Accepted publickey for core from 139.178.68.195 port 56554 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:19.469872 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:19.477920 systemd-logind[1453]: New session 6 of user core. Nov 12 20:52:19.487170 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:52:19.560812 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:52:19.561569 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:52:19.576449 sudo[1628]: pam_unix(sudo:session): session closed for user root Nov 12 20:52:19.586390 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:52:19.587085 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:52:19.619576 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:52:19.622401 auditctl[1631]: No rules Nov 12 20:52:19.624134 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:52:19.624457 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:52:19.630467 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:52:19.693508 augenrules[1649]: No rules Nov 12 20:52:19.695073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:52:19.701588 sudo[1627]: pam_unix(sudo:session): session closed for user root Nov 12 20:52:19.715356 sshd[1624]: pam_unix(sshd:session): session closed for user core Nov 12 20:52:19.725400 systemd[1]: sshd@5-137.184.81.153:22-139.178.68.195:56554.service: Deactivated successfully. 
Nov 12 20:52:19.727747 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:52:19.733576 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:52:19.751981 systemd[1]: Started sshd@6-137.184.81.153:22-139.178.68.195:56568.service - OpenSSH per-connection server daemon (139.178.68.195:56568). Nov 12 20:52:19.762809 systemd-logind[1453]: Removed session 6. Nov 12 20:52:19.834661 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 56568 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:52:19.838652 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:52:19.856206 systemd-logind[1453]: New session 7 of user core. Nov 12 20:52:19.865054 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:52:19.945265 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:52:19.953481 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:52:20.739420 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:52:20.740350 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:52:21.492116 dockerd[1676]: time="2024-11-12T20:52:21.491283636Z" level=info msg="Starting up" Nov 12 20:52:21.661166 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport5051262-merged.mount: Deactivated successfully. Nov 12 20:52:21.677064 systemd[1]: var-lib-docker-metacopy\x2dcheck1194664363-merged.mount: Deactivated successfully. Nov 12 20:52:21.706781 dockerd[1676]: time="2024-11-12T20:52:21.706524518Z" level=info msg="Loading containers: start." 
Nov 12 20:52:21.948706 kernel: Initializing XFRM netlink socket Nov 12 20:52:22.125325 systemd-networkd[1375]: docker0: Link UP Nov 12 20:52:22.158846 dockerd[1676]: time="2024-11-12T20:52:22.158756713Z" level=info msg="Loading containers: done." Nov 12 20:52:22.196480 dockerd[1676]: time="2024-11-12T20:52:22.195275660Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:52:22.196480 dockerd[1676]: time="2024-11-12T20:52:22.195517451Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:52:22.196480 dockerd[1676]: time="2024-11-12T20:52:22.195752598Z" level=info msg="Daemon has completed initialization" Nov 12 20:52:22.289122 dockerd[1676]: time="2024-11-12T20:52:22.289003910Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:52:22.289737 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:52:23.126750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:52:23.140391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:52:23.470561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:52:23.489419 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:52:23.767804 kubelet[1833]: E1112 20:52:23.767524 1833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:52:23.778324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:52:23.778557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:52:23.944877 containerd[1483]: time="2024-11-12T20:52:23.944258010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:52:24.814907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469782958.mount: Deactivated successfully. 
Nov 12 20:52:29.238677 containerd[1483]: time="2024-11-12T20:52:29.238352247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:29.241662 containerd[1483]: time="2024-11-12T20:52:29.241179702Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:52:29.242568 containerd[1483]: time="2024-11-12T20:52:29.242476594Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:29.248848 containerd[1483]: time="2024-11-12T20:52:29.248729072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:29.281844 containerd[1483]: time="2024-11-12T20:52:29.276838532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 5.332512267s" Nov 12 20:52:29.281844 containerd[1483]: time="2024-11-12T20:52:29.276937414Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:52:29.375990 containerd[1483]: time="2024-11-12T20:52:29.375637988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:52:32.518982 containerd[1483]: time="2024-11-12T20:52:32.517709796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:32.521338 containerd[1483]: time="2024-11-12T20:52:32.521150906Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:52:32.522553 containerd[1483]: time="2024-11-12T20:52:32.522486587Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:32.529982 containerd[1483]: time="2024-11-12T20:52:32.529753577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:32.533556 containerd[1483]: time="2024-11-12T20:52:32.531451179Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 3.155744918s" Nov 12 20:52:32.533556 containerd[1483]: time="2024-11-12T20:52:32.532809454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:52:32.599480 containerd[1483]: time="2024-11-12T20:52:32.599028082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:52:32.602080 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 12 20:52:33.964837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 12 20:52:33.973257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:52:34.398104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:34.402053 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:52:34.570775 kubelet[1925]: E1112 20:52:34.569489 1925 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:52:34.575776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:52:34.576012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:52:34.744046 containerd[1483]: time="2024-11-12T20:52:34.743924878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.748949 containerd[1483]: time="2024-11-12T20:52:34.748726476Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:52:34.758667 containerd[1483]: time="2024-11-12T20:52:34.754798767Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.769216 containerd[1483]: time="2024-11-12T20:52:34.767547944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:34.771447 containerd[1483]: time="2024-11-12T20:52:34.771327988Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 2.172239578s" Nov 12 20:52:34.771447 containerd[1483]: time="2024-11-12T20:52:34.771402297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:52:34.821677 containerd[1483]: time="2024-11-12T20:52:34.821238003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:52:35.685073 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 12 20:52:36.598430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955505451.mount: Deactivated successfully. Nov 12 20:52:37.736603 containerd[1483]: time="2024-11-12T20:52:37.736506828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:37.739073 containerd[1483]: time="2024-11-12T20:52:37.738412914Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:52:37.739661 containerd[1483]: time="2024-11-12T20:52:37.739528602Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:37.743760 containerd[1483]: time="2024-11-12T20:52:37.743642607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:37.748169 containerd[1483]: 
time="2024-11-12T20:52:37.747680836Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 2.926293775s" Nov 12 20:52:37.748169 containerd[1483]: time="2024-11-12T20:52:37.747975692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:52:37.808689 containerd[1483]: time="2024-11-12T20:52:37.808223443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:52:38.550406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872096021.mount: Deactivated successfully. Nov 12 20:52:40.194260 containerd[1483]: time="2024-11-12T20:52:40.194173532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.197354 containerd[1483]: time="2024-11-12T20:52:40.196967879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:52:40.205701 containerd[1483]: time="2024-11-12T20:52:40.205528910Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.216678 containerd[1483]: time="2024-11-12T20:52:40.216509604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.219439 containerd[1483]: time="2024-11-12T20:52:40.217992677Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.40969559s" Nov 12 20:52:40.219439 containerd[1483]: time="2024-11-12T20:52:40.219448088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:52:40.260055 containerd[1483]: time="2024-11-12T20:52:40.259982786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:52:40.263337 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Nov 12 20:52:40.953561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855227750.mount: Deactivated successfully. Nov 12 20:52:40.982905 containerd[1483]: time="2024-11-12T20:52:40.982823056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.985543 containerd[1483]: time="2024-11-12T20:52:40.985453648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:52:40.986476 containerd[1483]: time="2024-11-12T20:52:40.986420027Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.993697 containerd[1483]: time="2024-11-12T20:52:40.992811926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:40.994547 containerd[1483]: time="2024-11-12T20:52:40.994289817Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 734.243772ms" Nov 12 20:52:40.994547 containerd[1483]: time="2024-11-12T20:52:40.994361382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:52:41.065003 containerd[1483]: time="2024-11-12T20:52:41.064914164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:52:41.824897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149658602.mount: Deactivated successfully. Nov 12 20:52:44.715479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:52:44.725053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:52:45.068273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:45.088354 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:52:45.234683 kubelet[2059]: E1112 20:52:45.233537 2059 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:52:45.237053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:52:45.237694 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:52:45.893890 containerd[1483]: time="2024-11-12T20:52:45.893004053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:45.895850 containerd[1483]: time="2024-11-12T20:52:45.895543151Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:52:45.905136 containerd[1483]: time="2024-11-12T20:52:45.904900950Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:45.914241 containerd[1483]: time="2024-11-12T20:52:45.912085043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:52:45.914241 containerd[1483]: time="2024-11-12T20:52:45.913974877Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.848986363s" Nov 12 20:52:45.914241 containerd[1483]: time="2024-11-12T20:52:45.914048453Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:52:51.092336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:51.119338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:52:51.178539 systemd[1]: Reloading requested from client PID 2137 ('systemctl') (unit session-7.scope)... Nov 12 20:52:51.178866 systemd[1]: Reloading... 
Nov 12 20:52:51.480672 zram_generator::config[2177]: No configuration found. Nov 12 20:52:51.775274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:52:51.961874 systemd[1]: Reloading finished in 782 ms. Nov 12 20:52:52.110672 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:52:52.110826 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:52:52.113402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:52.142018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:52:52.375962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:52:52.402386 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:52:52.538614 kubelet[2230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:52:52.539847 kubelet[2230]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:52:52.539847 kubelet[2230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:52:52.552645 kubelet[2230]: I1112 20:52:52.543547 2230 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:52:53.473686 kubelet[2230]: I1112 20:52:53.473099 2230 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:52:53.473686 kubelet[2230]: I1112 20:52:53.473181 2230 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:52:53.473686 kubelet[2230]: I1112 20:52:53.473559 2230 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:52:53.519058 kubelet[2230]: I1112 20:52:53.518754 2230 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:52:53.524792 kubelet[2230]: E1112 20:52:53.524613 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.81.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.569787 kubelet[2230]: I1112 20:52:53.568749 2230 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:52:53.571118 kubelet[2230]: I1112 20:52:53.570933 2230 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:52:53.575140 kubelet[2230]: I1112 20:52:53.574232 2230 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:52:53.575140 kubelet[2230]: I1112 20:52:53.574330 2230 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:52:53.575140 kubelet[2230]: I1112 20:52:53.574347 2230 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:52:53.575140 kubelet[2230]: I1112 
20:52:53.574682 2230 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:52:53.577105 kubelet[2230]: I1112 20:52:53.575736 2230 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:52:53.577105 kubelet[2230]: I1112 20:52:53.576489 2230 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:52:53.577105 kubelet[2230]: I1112 20:52:53.576556 2230 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:52:53.577105 kubelet[2230]: I1112 20:52:53.576581 2230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:52:53.580327 kubelet[2230]: W1112 20:52:53.579864 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.81.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-ee124ee133&limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.580327 kubelet[2230]: E1112 20:52:53.579979 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.81.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-ee124ee133&limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.583060 kubelet[2230]: W1112 20:52:53.582453 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.81.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.583060 kubelet[2230]: E1112 20:52:53.582513 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.81.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.583060 kubelet[2230]: I1112 20:52:53.582692 2230 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:52:53.594666 kubelet[2230]: I1112 20:52:53.593694 2230 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:52:53.594666 kubelet[2230]: W1112 20:52:53.593847 2230 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:52:53.595555 kubelet[2230]: I1112 20:52:53.595525 2230 server.go:1256] "Started kubelet" Nov 12 20:52:53.603518 kubelet[2230]: I1112 20:52:53.603458 2230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:52:53.623965 kubelet[2230]: I1112 20:52:53.620639 2230 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:52:53.623965 kubelet[2230]: E1112 20:52:53.623465 2230 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.81.153:6443/api/v1/namespaces/default/events\": dial tcp 137.184.81.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-ee124ee133.180753d9165d87e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-ee124ee133,UID:ci-4081.2.0-a-ee124ee133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-ee124ee133,},FirstTimestamp:2024-11-12 20:52:53.595482089 +0000 UTC m=+1.185987286,LastTimestamp:2024-11-12 20:52:53.595482089 +0000 UTC m=+1.185987286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-ee124ee133,}" Nov 12 20:52:53.633060 kubelet[2230]: I1112 20:52:53.633004 2230 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:52:53.636332 kubelet[2230]: I1112 20:52:53.636283 2230 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:52:53.637249 kubelet[2230]: I1112 20:52:53.637211 2230 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:52:53.640170 kubelet[2230]: I1112 20:52:53.640125 2230 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:52:53.645710 kubelet[2230]: I1112 20:52:53.645661 2230 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:52:53.646002 kubelet[2230]: I1112 20:52:53.645984 2230 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:52:53.651133 kubelet[2230]: E1112 20:52:53.650332 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.81.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-ee124ee133?timeout=10s\": dial tcp 137.184.81.153:6443: connect: connection refused" interval="200ms" Nov 12 20:52:53.651133 kubelet[2230]: W1112 20:52:53.650734 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.81.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.651133 kubelet[2230]: E1112 20:52:53.650892 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.81.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.659704 kubelet[2230]: E1112 20:52:53.659035 2230 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:52:53.659704 kubelet[2230]: I1112 20:52:53.659250 2230 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:52:53.659704 kubelet[2230]: I1112 20:52:53.659264 2230 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:52:53.659704 kubelet[2230]: I1112 20:52:53.659380 2230 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:52:53.692406 kubelet[2230]: I1112 20:52:53.692349 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:52:53.698136 kubelet[2230]: I1112 20:52:53.698073 2230 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:52:53.698401 kubelet[2230]: I1112 20:52:53.698370 2230 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:52:53.698545 kubelet[2230]: I1112 20:52:53.698515 2230 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:52:53.699244 kubelet[2230]: E1112 20:52:53.699203 2230 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:52:53.710173 kubelet[2230]: I1112 20:52:53.710094 2230 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:52:53.710964 kubelet[2230]: I1112 20:52:53.710449 2230 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:52:53.711650 kubelet[2230]: I1112 20:52:53.711485 2230 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:52:53.715791 kubelet[2230]: W1112 20:52:53.710912 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://137.184.81.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.715791 kubelet[2230]: E1112 20:52:53.715363 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.81.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused Nov 12 20:52:53.722214 kubelet[2230]: I1112 20:52:53.722130 2230 policy_none.go:49] "None policy: Start" Nov 12 20:52:53.728374 kubelet[2230]: I1112 20:52:53.726952 2230 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:52:53.728374 kubelet[2230]: I1112 20:52:53.727018 2230 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:52:53.742238 kubelet[2230]: I1112 20:52:53.742034 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133" Nov 12 20:52:53.743463 kubelet[2230]: E1112 20:52:53.743408 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.81.153:6443/api/v1/nodes\": dial tcp 137.184.81.153:6443: connect: connection refused" node="ci-4081.2.0-a-ee124ee133" Nov 12 20:52:53.765359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:52:53.791982 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:52:53.805180 kubelet[2230]: E1112 20:52:53.805086 2230 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:52:53.828011 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:52:53.851898 kubelet[2230]: E1112 20:52:53.851303 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.81.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-ee124ee133?timeout=10s\": dial tcp 137.184.81.153:6443: connect: connection refused" interval="400ms" Nov 12 20:52:53.855431 kubelet[2230]: I1112 20:52:53.853536 2230 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:52:53.855431 kubelet[2230]: I1112 20:52:53.855315 2230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:52:53.871163 kubelet[2230]: E1112 20:52:53.871073 2230 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-a-ee124ee133\" not found" Nov 12 20:52:53.945649 kubelet[2230]: I1112 20:52:53.945560 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133" Nov 12 20:52:53.946419 kubelet[2230]: E1112 20:52:53.946364 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.81.153:6443/api/v1/nodes\": dial tcp 137.184.81.153:6443: connect: connection refused" node="ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.005254 update_engine[1455]: I20241112 20:52:54.004367 1455 update_attempter.cc:509] Updating boot flags... 
Nov 12 20:52:54.007453 kubelet[2230]: I1112 20:52:54.006713 2230 topology_manager.go:215] "Topology Admit Handler" podUID="4c335f70bb362172bf0308dc9f159044" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.009507 kubelet[2230]: I1112 20:52:54.009440 2230 topology_manager.go:215] "Topology Admit Handler" podUID="13c220498939fce3ad591d950765b8f3" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.012600 kubelet[2230]: I1112 20:52:54.011550 2230 topology_manager.go:215] "Topology Admit Handler" podUID="f59ea426fb0621d0bad86676ee1d5f23" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.061531 systemd[1]: Created slice kubepods-burstable-pod4c335f70bb362172bf0308dc9f159044.slice - libcontainer container kubepods-burstable-pod4c335f70bb362172bf0308dc9f159044.slice. Nov 12 20:52:54.114231 systemd[1]: Created slice kubepods-burstable-pod13c220498939fce3ad591d950765b8f3.slice - libcontainer container kubepods-burstable-pod13c220498939fce3ad591d950765b8f3.slice. 
Nov 12 20:52:54.163831 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2267) Nov 12 20:52:54.164090 kubelet[2230]: I1112 20:52:54.162633 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164090 kubelet[2230]: I1112 20:52:54.162702 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164090 kubelet[2230]: I1112 20:52:54.162752 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13c220498939fce3ad591d950765b8f3-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-ee124ee133\" (UID: \"13c220498939fce3ad591d950765b8f3\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164090 kubelet[2230]: I1112 20:52:54.162814 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164090 kubelet[2230]: I1112 20:52:54.162852 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164934 kubelet[2230]: I1112 20:52:54.162889 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164934 kubelet[2230]: I1112 20:52:54.162917 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164934 kubelet[2230]: I1112 20:52:54.162953 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133" Nov 12 20:52:54.164934 kubelet[2230]: I1112 20:52:54.162987 2230 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133" Nov 
Nov 12 20:52:54.189290 systemd[1]: Created slice kubepods-burstable-podf59ea426fb0621d0bad86676ee1d5f23.slice - libcontainer container kubepods-burstable-podf59ea426fb0621d0bad86676ee1d5f23.slice.
Nov 12 20:52:54.253487 kubelet[2230]: E1112 20:52:54.253436 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.81.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-ee124ee133?timeout=10s\": dial tcp 137.184.81.153:6443: connect: connection refused" interval="800ms"
Nov 12 20:52:54.286540 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2270)
Nov 12 20:52:54.371749 kubelet[2230]: I1112 20:52:54.371585 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:54.374488 kubelet[2230]: E1112 20:52:54.374183 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.81.153:6443/api/v1/nodes\": dial tcp 137.184.81.153:6443: connect: connection refused" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:54.393769 kubelet[2230]: E1112 20:52:54.393711 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:54.406796 containerd[1483]: time="2024-11-12T20:52:54.406043753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-ee124ee133,Uid:4c335f70bb362172bf0308dc9f159044,Namespace:kube-system,Attempt:0,}"
Nov 12 20:52:54.479959 kubelet[2230]: E1112 20:52:54.479533 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:54.480696 containerd[1483]: time="2024-11-12T20:52:54.480284841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-ee124ee133,Uid:13c220498939fce3ad591d950765b8f3,Namespace:kube-system,Attempt:0,}"
Nov 12 20:52:54.500165 kubelet[2230]: E1112 20:52:54.500098 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:54.502719 containerd[1483]: time="2024-11-12T20:52:54.501071181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-ee124ee133,Uid:f59ea426fb0621d0bad86676ee1d5f23,Namespace:kube-system,Attempt:0,}"
Nov 12 20:52:54.670017 kubelet[2230]: W1112 20:52:54.669806 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.81.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:54.672078 kubelet[2230]: E1112 20:52:54.672028 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.81.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:54.732607 kubelet[2230]: W1112 20:52:54.732506 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.81.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:54.732934 kubelet[2230]: E1112 20:52:54.732903 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.81.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:54.863868 kubelet[2230]: W1112 20:52:54.863791 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://137.184.81.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:54.864238 kubelet[2230]: E1112 20:52:54.864174 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.81.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:55.054942 kubelet[2230]: E1112 20:52:55.054869 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.81.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-ee124ee133?timeout=10s\": dial tcp 137.184.81.153:6443: connect: connection refused" interval="1.6s"
Nov 12 20:52:55.066461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617156987.mount: Deactivated successfully.
Nov 12 20:52:55.079256 containerd[1483]: time="2024-11-12T20:52:55.077423388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:52:55.083127 containerd[1483]: time="2024-11-12T20:52:55.082820925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:52:55.086855 containerd[1483]: time="2024-11-12T20:52:55.085252004Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:52:55.089809 containerd[1483]: time="2024-11-12T20:52:55.088534436Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:52:55.094890 containerd[1483]: time="2024-11-12T20:52:55.094593680Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:52:55.096795 containerd[1483]: time="2024-11-12T20:52:55.096720809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 12 20:52:55.099663 containerd[1483]: time="2024-11-12T20:52:55.098091057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:52:55.099663 containerd[1483]: time="2024-11-12T20:52:55.099560331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.778892ms"
Nov 12 20:52:55.101671 containerd[1483]: time="2024-11-12T20:52:55.101111115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:52:55.109682 containerd[1483]: time="2024-11-12T20:52:55.108224306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.05446ms"
Nov 12 20:52:55.109682 containerd[1483]: time="2024-11-12T20:52:55.109144371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 607.946785ms"
Nov 12 20:52:55.140671 kubelet[2230]: W1112 20:52:55.140497 2230 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.81.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-ee124ee133&limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:55.140671 kubelet[2230]: E1112 20:52:55.140574 2230 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.81.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-ee124ee133&limit=500&resourceVersion=0": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:55.200695 kubelet[2230]: I1112 20:52:55.200106 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:55.200695 kubelet[2230]: E1112 20:52:55.200657 2230 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.81.153:6443/api/v1/nodes\": dial tcp 137.184.81.153:6443: connect: connection refused" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:55.439112 containerd[1483]: time="2024-11-12T20:52:55.438499744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:52:55.441251 containerd[1483]: time="2024-11-12T20:52:55.439044132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:52:55.441519 containerd[1483]: time="2024-11-12T20:52:55.441189025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.441827 containerd[1483]: time="2024-11-12T20:52:55.441665492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.447226 containerd[1483]: time="2024-11-12T20:52:55.447037635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:52:55.447608 containerd[1483]: time="2024-11-12T20:52:55.447047891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:52:55.447608 containerd[1483]: time="2024-11-12T20:52:55.447152952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:52:55.447608 containerd[1483]: time="2024-11-12T20:52:55.447180912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.447608 containerd[1483]: time="2024-11-12T20:52:55.447328638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.448836 containerd[1483]: time="2024-11-12T20:52:55.448734316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:52:55.449938 containerd[1483]: time="2024-11-12T20:52:55.449827762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.450928 containerd[1483]: time="2024-11-12T20:52:55.450257546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:52:55.490851 systemd[1]: Started cri-containerd-4a95fb82f2e076e21fb60621b98eb6360a61f6684b7d48241f7633d1cbba698f.scope - libcontainer container 4a95fb82f2e076e21fb60621b98eb6360a61f6684b7d48241f7633d1cbba698f.
Nov 12 20:52:55.508374 systemd[1]: Started cri-containerd-91066767cba93e61081df47df777f8cca1fac0334972a74acefe055bd9ef6c6b.scope - libcontainer container 91066767cba93e61081df47df777f8cca1fac0334972a74acefe055bd9ef6c6b.
Nov 12 20:52:55.538953 systemd[1]: Started cri-containerd-eb7366341c46a0bb27636605a03060df9b1ae334bc375cbbd3303ff698491037.scope - libcontainer container eb7366341c46a0bb27636605a03060df9b1ae334bc375cbbd3303ff698491037.
Nov 12 20:52:55.605331 kubelet[2230]: E1112 20:52:55.605279 2230 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.81.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.81.153:6443: connect: connection refused
Nov 12 20:52:55.672661 containerd[1483]: time="2024-11-12T20:52:55.672409741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-ee124ee133,Uid:4c335f70bb362172bf0308dc9f159044,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7366341c46a0bb27636605a03060df9b1ae334bc375cbbd3303ff698491037\""
Nov 12 20:52:55.673727 containerd[1483]: time="2024-11-12T20:52:55.672798569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-ee124ee133,Uid:f59ea426fb0621d0bad86676ee1d5f23,Namespace:kube-system,Attempt:0,} returns sandbox id \"91066767cba93e61081df47df777f8cca1fac0334972a74acefe055bd9ef6c6b\""
Nov 12 20:52:55.679518 kubelet[2230]: E1112 20:52:55.679419 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:55.680125 kubelet[2230]: E1112 20:52:55.679998 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:55.698885 containerd[1483]: time="2024-11-12T20:52:55.698440453Z" level=info msg="CreateContainer within sandbox \"91066767cba93e61081df47df777f8cca1fac0334972a74acefe055bd9ef6c6b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 20:52:55.706873 containerd[1483]: time="2024-11-12T20:52:55.706772001Z" level=info msg="CreateContainer within sandbox \"eb7366341c46a0bb27636605a03060df9b1ae334bc375cbbd3303ff698491037\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 20:52:55.718889 containerd[1483]: time="2024-11-12T20:52:55.718815459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-ee124ee133,Uid:13c220498939fce3ad591d950765b8f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a95fb82f2e076e21fb60621b98eb6360a61f6684b7d48241f7633d1cbba698f\""
Nov 12 20:52:55.722748 kubelet[2230]: E1112 20:52:55.722558 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:55.732680 containerd[1483]: time="2024-11-12T20:52:55.732613784Z" level=info msg="CreateContainer within sandbox \"4a95fb82f2e076e21fb60621b98eb6360a61f6684b7d48241f7633d1cbba698f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 20:52:55.793249 containerd[1483]: time="2024-11-12T20:52:55.793180950Z" level=info msg="CreateContainer within sandbox \"eb7366341c46a0bb27636605a03060df9b1ae334bc375cbbd3303ff698491037\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e988eaf4cae92816c668b45ab3161c9e55588d457112a71cb6362b9b8d25b21e\""
Nov 12 20:52:55.797061 containerd[1483]: time="2024-11-12T20:52:55.794794059Z" level=info msg="StartContainer for \"e988eaf4cae92816c668b45ab3161c9e55588d457112a71cb6362b9b8d25b21e\""
Nov 12 20:52:55.812530 containerd[1483]: time="2024-11-12T20:52:55.812442191Z" level=info msg="CreateContainer within sandbox \"91066767cba93e61081df47df777f8cca1fac0334972a74acefe055bd9ef6c6b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a65ac8ef527340250deda2c387110d99737da76dda2179dfa85c2ff09f045cb6\""
Nov 12 20:52:55.818689 containerd[1483]: time="2024-11-12T20:52:55.816939079Z" level=info msg="StartContainer for \"a65ac8ef527340250deda2c387110d99737da76dda2179dfa85c2ff09f045cb6\""
Nov 12 20:52:55.825128 containerd[1483]: time="2024-11-12T20:52:55.824946511Z" level=info msg="CreateContainer within sandbox \"4a95fb82f2e076e21fb60621b98eb6360a61f6684b7d48241f7633d1cbba698f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9449c894781ae34cdd2505fe3c5748ba7eb070935dd55afa46c18e32d296c3eb\""
Nov 12 20:52:55.849249 containerd[1483]: time="2024-11-12T20:52:55.849194000Z" level=info msg="StartContainer for \"9449c894781ae34cdd2505fe3c5748ba7eb070935dd55afa46c18e32d296c3eb\""
Nov 12 20:52:55.907904 systemd[1]: Started cri-containerd-e988eaf4cae92816c668b45ab3161c9e55588d457112a71cb6362b9b8d25b21e.scope - libcontainer container e988eaf4cae92816c668b45ab3161c9e55588d457112a71cb6362b9b8d25b21e.
Nov 12 20:52:55.945949 systemd[1]: Started cri-containerd-a65ac8ef527340250deda2c387110d99737da76dda2179dfa85c2ff09f045cb6.scope - libcontainer container a65ac8ef527340250deda2c387110d99737da76dda2179dfa85c2ff09f045cb6.
Nov 12 20:52:55.976681 systemd[1]: Started cri-containerd-9449c894781ae34cdd2505fe3c5748ba7eb070935dd55afa46c18e32d296c3eb.scope - libcontainer container 9449c894781ae34cdd2505fe3c5748ba7eb070935dd55afa46c18e32d296c3eb.
Nov 12 20:52:56.041966 containerd[1483]: time="2024-11-12T20:52:56.041900240Z" level=info msg="StartContainer for \"e988eaf4cae92816c668b45ab3161c9e55588d457112a71cb6362b9b8d25b21e\" returns successfully"
Nov 12 20:52:56.094382 containerd[1483]: time="2024-11-12T20:52:56.094288794Z" level=info msg="StartContainer for \"a65ac8ef527340250deda2c387110d99737da76dda2179dfa85c2ff09f045cb6\" returns successfully"
Nov 12 20:52:56.153584 containerd[1483]: time="2024-11-12T20:52:56.153504453Z" level=info msg="StartContainer for \"9449c894781ae34cdd2505fe3c5748ba7eb070935dd55afa46c18e32d296c3eb\" returns successfully"
Nov 12 20:52:56.658669 kubelet[2230]: E1112 20:52:56.656377 2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.81.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-ee124ee133?timeout=10s\": dial tcp 137.184.81.153:6443: connect: connection refused" interval="3.2s"
Nov 12 20:52:56.757188 kubelet[2230]: E1112 20:52:56.757019 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:56.763337 kubelet[2230]: E1112 20:52:56.763259 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:56.773759 kubelet[2230]: E1112 20:52:56.772871 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:56.810848 kubelet[2230]: I1112 20:52:56.805250 2230 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:57.774114 kubelet[2230]: E1112 20:52:57.773274 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:57.778123 kubelet[2230]: E1112 20:52:57.777248 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:57.778123 kubelet[2230]: E1112 20:52:57.777877 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:58.783448 kubelet[2230]: E1112 20:52:58.783343 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:59.155734 kubelet[2230]: I1112 20:52:59.155340 2230 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:59.455490 kubelet[2230]: E1112 20:52:59.454266 2230 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:52:59.455490 kubelet[2230]: E1112 20:52:59.455084 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:52:59.585969 kubelet[2230]: I1112 20:52:59.585824 2230 apiserver.go:52] "Watching apiserver"
Nov 12 20:52:59.646741 kubelet[2230]: I1112 20:52:59.646660 2230 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 20:53:04.343667 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-7.scope)...
Nov 12 20:53:04.343695 systemd[1]: Reloading...
Nov 12 20:53:04.351949 kubelet[2230]: W1112 20:53:04.350641 2230 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:53:04.353479 kubelet[2230]: E1112 20:53:04.352490 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:04.581676 zram_generator::config[2555]: No configuration found.
Nov 12 20:53:04.822290 kubelet[2230]: E1112 20:53:04.822140 2230 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:04.869530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:05.002184 systemd[1]: Reloading finished in 657 ms.
Nov 12 20:53:05.088162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:05.093396 kubelet[2230]: I1112 20:53:05.088712 2230 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:53:05.113688 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:53:05.114391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:05.114482 systemd[1]: kubelet.service: Consumed 1.780s CPU time, 110.5M memory peak, 0B memory swap peak.
Nov 12 20:53:05.129900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:05.429357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:05.449716 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:53:05.591674 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:53:05.595782 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:53:05.595782 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:53:05.596205 kubelet[2605]: I1112 20:53:05.593869 2605 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:53:05.608777 kubelet[2605]: I1112 20:53:05.608411 2605 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 20:53:05.608777 kubelet[2605]: I1112 20:53:05.608468 2605 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:53:05.610001 kubelet[2605]: I1112 20:53:05.609893 2605 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 20:53:05.615312 kubelet[2605]: I1112 20:53:05.615169 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 20:53:05.622587 kubelet[2605]: I1112 20:53:05.622519 2605 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.639901 2605 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.640339 2605 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.640786 2605 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.640861 2605 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.640879 2605 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 20:53:05.651205 kubelet[2605]: I1112 20:53:05.640954 2605 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:53:05.651776 kubelet[2605]: I1112 20:53:05.641149 2605 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 20:53:05.651776 kubelet[2605]: I1112 20:53:05.641177 2605 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:53:05.651776 kubelet[2605]: I1112 20:53:05.641220 2605 kubelet.go:312] "Adding apiserver pod source"
Nov 12 20:53:05.651776 kubelet[2605]: I1112 20:53:05.641246 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:53:05.659678 kubelet[2605]: I1112 20:53:05.657347 2605 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:53:05.659678 kubelet[2605]: I1112 20:53:05.658522 2605 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:53:05.674912 kubelet[2605]: I1112 20:53:05.672895 2605 server.go:1256] "Started kubelet"
Nov 12 20:53:05.703481 kubelet[2605]: I1112 20:53:05.701030 2605 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:53:05.709554 kubelet[2605]: I1112 20:53:05.707224 2605 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 20:53:05.710659 kubelet[2605]: I1112 20:53:05.710483 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:53:05.719725 kubelet[2605]: I1112 20:53:05.719480 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:53:05.735582 kubelet[2605]: I1112 20:53:05.721972 2605 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:53:05.735582 kubelet[2605]: I1112 20:53:05.728716 2605 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 20:53:05.735582 kubelet[2605]: I1112 20:53:05.728922 2605 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 20:53:05.735582 kubelet[2605]: I1112 20:53:05.732329 2605 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 20:53:05.753037 kubelet[2605]: I1112 20:53:05.753000 2605 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:53:05.755966 kubelet[2605]: E1112 20:53:05.755034 2605 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:53:05.755966 kubelet[2605]: I1112 20:53:05.755093 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:53:05.769953 kubelet[2605]: I1112 20:53:05.767739 2605 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:53:05.807178 sudo[2623]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 12 20:53:05.807859 sudo[2623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 12 20:53:05.832837 kubelet[2605]: I1112 20:53:05.831445 2605 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:05.843388 kubelet[2605]: I1112 20:53:05.843139 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:53:05.871118 kubelet[2605]: I1112 20:53:05.870652 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:53:05.871118 kubelet[2605]: I1112 20:53:05.870721 2605 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:53:05.871118 kubelet[2605]: I1112 20:53:05.870753 2605 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 20:53:05.871118 kubelet[2605]: E1112 20:53:05.870850 2605 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:53:05.874953 kubelet[2605]: I1112 20:53:05.874534 2605 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:05.874953 kubelet[2605]: I1112 20:53:05.874878 2605 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:05.973755 kubelet[2605]: E1112 20:53:05.973469 2605 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:53:06.001017 kubelet[2605]: I1112 20:53:06.000963 2605 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:53:06.001318 kubelet[2605]: I1112 20:53:06.001079 2605 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:53:06.001318 kubelet[2605]: I1112 20:53:06.001108 2605 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:53:06.007436 kubelet[2605]: I1112 20:53:06.001719 2605 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 20:53:06.007436 kubelet[2605]: I1112 20:53:06.001771 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 20:53:06.007436 kubelet[2605]: I1112 20:53:06.001781 2605 policy_none.go:49] "None policy: Start"
Nov 12 20:53:06.010973 kubelet[2605]: I1112 20:53:06.010762 2605 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:53:06.010973 kubelet[2605]: I1112 20:53:06.010838 2605 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:53:06.013058 kubelet[2605]: I1112 20:53:06.011525 2605 state_mem.go:75] "Updated machine memory state"
Nov 12 20:53:06.037585 kubelet[2605]: I1112 20:53:06.036994 2605 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:53:06.044211 kubelet[2605]: I1112 20:53:06.043322 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:53:06.178689 kubelet[2605]: I1112 20:53:06.175249 2605 topology_manager.go:215] "Topology Admit Handler" podUID="f59ea426fb0621d0bad86676ee1d5f23" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.178689 kubelet[2605]: I1112 20:53:06.175427 2605 topology_manager.go:215] "Topology Admit Handler" podUID="4c335f70bb362172bf0308dc9f159044" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.178689 kubelet[2605]: I1112 20:53:06.175489 2605 topology_manager.go:215] "Topology Admit Handler" podUID="13c220498939fce3ad591d950765b8f3" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.202289 kubelet[2605]: W1112 20:53:06.202246 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:53:06.213532 kubelet[2605]: W1112 20:53:06.213466 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:53:06.223972 kubelet[2605]: W1112 20:53:06.223530 2605 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:53:06.225170 kubelet[2605]: E1112 20:53:06.225109 2605 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.0-a-ee124ee133\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.237326 kubelet[2605]: I1112 20:53:06.237259 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.237850 kubelet[2605]: I1112 20:53:06.237714 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.237850 kubelet[2605]: I1112 20:53:06.237803 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238321 kubelet[2605]: I1112 20:53:06.238109 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238321 kubelet[2605]: I1112 20:53:06.238197 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238321 kubelet[2605]: I1112 20:53:06.238277 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13c220498939fce3ad591d950765b8f3-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-ee124ee133\" (UID: \"13c220498939fce3ad591d950765b8f3\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238725 kubelet[2605]: I1112 20:53:06.238518 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59ea426fb0621d0bad86676ee1d5f23-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-ee124ee133\" (UID: \"f59ea426fb0621d0bad86676ee1d5f23\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238725 kubelet[2605]: I1112 20:53:06.238591 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.238725 kubelet[2605]: I1112 20:53:06.238670 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c335f70bb362172bf0308dc9f159044-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-ee124ee133\" (UID: \"4c335f70bb362172bf0308dc9f159044\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133"
Nov 12 20:53:06.508391 kubelet[2605]: E1112 20:53:06.507839 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:06.517430 kubelet[2605]: E1112 20:53:06.515074 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:06.526655 kubelet[2605]: E1112 20:53:06.526498 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:06.662274 kubelet[2605]: I1112 20:53:06.660478 2605 apiserver.go:52] "Watching apiserver"
Nov 12 20:53:06.729911 kubelet[2605]: I1112 20:53:06.729828 2605 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 20:53:06.772474 kubelet[2605]: I1112 20:53:06.769115 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-a-ee124ee133" podStartSLOduration=0.768851771 podStartE2EDuration="768.851771ms" podCreationTimestamp="2024-11-12 20:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:06.736860886 +0000 UTC m=+1.258075457" watchObservedRunningTime="2024-11-12 20:53:06.768851771 +0000 UTC m=+1.290066347"
Nov 12 20:53:06.772474 kubelet[2605]: I1112 20:53:06.769552 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-ee124ee133" podStartSLOduration=0.76950394 podStartE2EDuration="769.50394ms" podCreationTimestamp="2024-11-12 20:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12
20:53:06.768072525 +0000 UTC m=+1.289287104" watchObservedRunningTime="2024-11-12 20:53:06.76950394 +0000 UTC m=+1.290718515" Nov 12 20:53:06.793318 kubelet[2605]: I1112 20:53:06.792912 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-a-ee124ee133" podStartSLOduration=2.792847643 podStartE2EDuration="2.792847643s" podCreationTimestamp="2024-11-12 20:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:06.792249476 +0000 UTC m=+1.313464560" watchObservedRunningTime="2024-11-12 20:53:06.792847643 +0000 UTC m=+1.314062221" Nov 12 20:53:06.950858 kubelet[2605]: E1112 20:53:06.950347 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:06.952758 kubelet[2605]: E1112 20:53:06.951516 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:06.953685 kubelet[2605]: E1112 20:53:06.952643 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:06.969513 sudo[2623]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:07.954398 kubelet[2605]: E1112 20:53:07.952722 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:07.958022 kubelet[2605]: E1112 20:53:07.957362 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:09.185439 kubelet[2605]: E1112 20:53:09.185395 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:09.852033 sudo[1660]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:09.867030 sshd[1657]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:09.878132 systemd[1]: sshd@6-137.184.81.153:22-139.178.68.195:56568.service: Deactivated successfully. Nov 12 20:53:09.887202 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:53:09.888160 systemd[1]: session-7.scope: Consumed 7.970s CPU time, 189.5M memory peak, 0B memory swap peak. Nov 12 20:53:09.891611 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:53:09.894268 systemd-logind[1453]: Removed session 7. Nov 12 20:53:09.974361 kubelet[2605]: E1112 20:53:09.974061 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:15.020202 kubelet[2605]: E1112 20:53:15.019873 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:15.990825 kubelet[2605]: E1112 20:53:15.990770 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:16.725549 kubelet[2605]: E1112 20:53:16.725464 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:17.556296 kubelet[2605]: I1112 20:53:17.556072 2605 
kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:53:17.557777 containerd[1483]: time="2024-11-12T20:53:17.556877000Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:53:17.558347 kubelet[2605]: I1112 20:53:17.557210 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:53:18.493221 kubelet[2605]: I1112 20:53:18.493171 2605 topology_manager.go:215] "Topology Admit Handler" podUID="95897e02-b56c-4f89-a70a-22670429f747" podNamespace="kube-system" podName="cilium-s7ccm" Nov 12 20:53:18.518905 systemd[1]: Created slice kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice - libcontainer container kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice. Nov 12 20:53:18.530412 kubelet[2605]: I1112 20:53:18.529727 2605 topology_manager.go:215] "Topology Admit Handler" podUID="2292f515-5882-430a-ac42-c3fb19f0ea71" podNamespace="kube-system" podName="kube-proxy-xm6hv" Nov 12 20:53:18.564125 systemd[1]: Created slice kubepods-besteffort-pod2292f515_5882_430a_ac42_c3fb19f0ea71.slice - libcontainer container kubepods-besteffort-pod2292f515_5882_430a_ac42_c3fb19f0ea71.slice. 
Nov 12 20:53:18.586068 kubelet[2605]: I1112 20:53:18.586014 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-cgroup\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586068 kubelet[2605]: I1112 20:53:18.586092 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-hubble-tls\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586130 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-run\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586193 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95897e02-b56c-4f89-a70a-22670429f747-cilium-config-path\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586233 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-kernel\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586268 2605 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsht2\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-kube-api-access-dsht2\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586308 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cni-path\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586504 kubelet[2605]: I1112 20:53:18.586341 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-lib-modules\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586789 kubelet[2605]: I1112 20:53:18.586374 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-xtables-lock\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586789 kubelet[2605]: I1112 20:53:18.586407 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95897e02-b56c-4f89-a70a-22670429f747-clustermesh-secrets\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586789 kubelet[2605]: I1112 20:53:18.586443 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2292f515-5882-430a-ac42-c3fb19f0ea71-xtables-lock\") pod \"kube-proxy-xm6hv\" (UID: \"2292f515-5882-430a-ac42-c3fb19f0ea71\") " pod="kube-system/kube-proxy-xm6hv" Nov 12 20:53:18.586789 kubelet[2605]: I1112 20:53:18.586479 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-bpf-maps\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586789 kubelet[2605]: I1112 20:53:18.586516 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-net\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586917 kubelet[2605]: I1112 20:53:18.586553 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfst\" (UniqueName: \"kubernetes.io/projected/2292f515-5882-430a-ac42-c3fb19f0ea71-kube-api-access-5bfst\") pod \"kube-proxy-xm6hv\" (UID: \"2292f515-5882-430a-ac42-c3fb19f0ea71\") " pod="kube-system/kube-proxy-xm6hv" Nov 12 20:53:18.586917 kubelet[2605]: I1112 20:53:18.586660 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-hostproc\") pod \"cilium-s7ccm\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586917 kubelet[2605]: I1112 20:53:18.586723 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-etc-cni-netd\") pod \"cilium-s7ccm\" (UID: 
\"95897e02-b56c-4f89-a70a-22670429f747\") " pod="kube-system/cilium-s7ccm" Nov 12 20:53:18.586917 kubelet[2605]: I1112 20:53:18.586781 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2292f515-5882-430a-ac42-c3fb19f0ea71-kube-proxy\") pod \"kube-proxy-xm6hv\" (UID: \"2292f515-5882-430a-ac42-c3fb19f0ea71\") " pod="kube-system/kube-proxy-xm6hv" Nov 12 20:53:18.586917 kubelet[2605]: I1112 20:53:18.586823 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2292f515-5882-430a-ac42-c3fb19f0ea71-lib-modules\") pod \"kube-proxy-xm6hv\" (UID: \"2292f515-5882-430a-ac42-c3fb19f0ea71\") " pod="kube-system/kube-proxy-xm6hv" Nov 12 20:53:18.671967 kubelet[2605]: I1112 20:53:18.671346 2605 topology_manager.go:215] "Topology Admit Handler" podUID="1abb8aca-ee80-49d4-bce6-f522ea5756c3" podNamespace="kube-system" podName="cilium-operator-5cc964979-7h6rf" Nov 12 20:53:18.686827 systemd[1]: Created slice kubepods-besteffort-pod1abb8aca_ee80_49d4_bce6_f522ea5756c3.slice - libcontainer container kubepods-besteffort-pod1abb8aca_ee80_49d4_bce6_f522ea5756c3.slice. 
Nov 12 20:53:18.688328 kubelet[2605]: I1112 20:53:18.687991 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1abb8aca-ee80-49d4-bce6-f522ea5756c3-cilium-config-path\") pod \"cilium-operator-5cc964979-7h6rf\" (UID: \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\") " pod="kube-system/cilium-operator-5cc964979-7h6rf" Nov 12 20:53:18.688328 kubelet[2605]: I1112 20:53:18.688034 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhxw\" (UniqueName: \"kubernetes.io/projected/1abb8aca-ee80-49d4-bce6-f522ea5756c3-kube-api-access-hlhxw\") pod \"cilium-operator-5cc964979-7h6rf\" (UID: \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\") " pod="kube-system/cilium-operator-5cc964979-7h6rf" Nov 12 20:53:18.837821 kubelet[2605]: E1112 20:53:18.836518 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:18.842265 containerd[1483]: time="2024-11-12T20:53:18.839438824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7ccm,Uid:95897e02-b56c-4f89-a70a-22670429f747,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:18.879827 kubelet[2605]: E1112 20:53:18.878967 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:18.883657 containerd[1483]: time="2024-11-12T20:53:18.883263944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xm6hv,Uid:2292f515-5882-430a-ac42-c3fb19f0ea71,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:18.958444 containerd[1483]: time="2024-11-12T20:53:18.956582678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:18.959579 containerd[1483]: time="2024-11-12T20:53:18.958872030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:18.960027 containerd[1483]: time="2024-11-12T20:53:18.959550540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:18.967918 containerd[1483]: time="2024-11-12T20:53:18.965776362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:18.976674 containerd[1483]: time="2024-11-12T20:53:18.975842370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:18.976674 containerd[1483]: time="2024-11-12T20:53:18.975981002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:18.976674 containerd[1483]: time="2024-11-12T20:53:18.976010721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:18.976996 containerd[1483]: time="2024-11-12T20:53:18.976156657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:19.023706 kubelet[2605]: E1112 20:53:19.023312 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:19.027522 containerd[1483]: time="2024-11-12T20:53:19.026663463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7h6rf,Uid:1abb8aca-ee80-49d4-bce6-f522ea5756c3,Namespace:kube-system,Attempt:0,}" Nov 12 20:53:19.030526 systemd[1]: Started cri-containerd-da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd.scope - libcontainer container da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd. Nov 12 20:53:19.043975 systemd[1]: Started cri-containerd-e8977ee17f4037d6ce10feda60c2bf87d4a7257f2c6faf047f6231e4bb805825.scope - libcontainer container e8977ee17f4037d6ce10feda60c2bf87d4a7257f2c6faf047f6231e4bb805825. Nov 12 20:53:19.132278 containerd[1483]: time="2024-11-12T20:53:19.131660906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:53:19.132656 containerd[1483]: time="2024-11-12T20:53:19.131994436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:53:19.132656 containerd[1483]: time="2024-11-12T20:53:19.132028503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:19.132656 containerd[1483]: time="2024-11-12T20:53:19.132593289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:53:19.148164 containerd[1483]: time="2024-11-12T20:53:19.147676410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7ccm,Uid:95897e02-b56c-4f89-a70a-22670429f747,Namespace:kube-system,Attempt:0,} returns sandbox id \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\"" Nov 12 20:53:19.152459 kubelet[2605]: E1112 20:53:19.151781 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:19.160852 containerd[1483]: time="2024-11-12T20:53:19.160732327Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 12 20:53:19.170767 containerd[1483]: time="2024-11-12T20:53:19.170322156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xm6hv,Uid:2292f515-5882-430a-ac42-c3fb19f0ea71,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8977ee17f4037d6ce10feda60c2bf87d4a7257f2c6faf047f6231e4bb805825\"" Nov 12 20:53:19.174543 kubelet[2605]: E1112 20:53:19.174216 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:19.186446 containerd[1483]: time="2024-11-12T20:53:19.186242949Z" level=info msg="CreateContainer within sandbox \"e8977ee17f4037d6ce10feda60c2bf87d4a7257f2c6faf047f6231e4bb805825\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:53:19.209743 systemd[1]: Started cri-containerd-33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0.scope - libcontainer container 33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0. 
Nov 12 20:53:19.251769 containerd[1483]: time="2024-11-12T20:53:19.251700497Z" level=info msg="CreateContainer within sandbox \"e8977ee17f4037d6ce10feda60c2bf87d4a7257f2c6faf047f6231e4bb805825\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43a5af8c68df54a60f40125169337af14f4f5fa88c91e23ed342f716bc8c1a94\"" Nov 12 20:53:19.256382 containerd[1483]: time="2024-11-12T20:53:19.255923144Z" level=info msg="StartContainer for \"43a5af8c68df54a60f40125169337af14f4f5fa88c91e23ed342f716bc8c1a94\"" Nov 12 20:53:19.318022 systemd[1]: Started cri-containerd-43a5af8c68df54a60f40125169337af14f4f5fa88c91e23ed342f716bc8c1a94.scope - libcontainer container 43a5af8c68df54a60f40125169337af14f4f5fa88c91e23ed342f716bc8c1a94. Nov 12 20:53:19.365496 containerd[1483]: time="2024-11-12T20:53:19.365364182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7h6rf,Uid:1abb8aca-ee80-49d4-bce6-f522ea5756c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\"" Nov 12 20:53:19.369115 kubelet[2605]: E1112 20:53:19.368737 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:19.407977 containerd[1483]: time="2024-11-12T20:53:19.407194330Z" level=info msg="StartContainer for \"43a5af8c68df54a60f40125169337af14f4f5fa88c91e23ed342f716bc8c1a94\" returns successfully" Nov 12 20:53:20.025672 kubelet[2605]: E1112 20:53:20.023716 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:28.129889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440531118.mount: Deactivated successfully. 
Nov 12 20:53:33.902236 containerd[1483]: time="2024-11-12T20:53:33.902112820Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:33.904041 containerd[1483]: time="2024-11-12T20:53:33.903235040Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735395" Nov 12 20:53:33.905559 containerd[1483]: time="2024-11-12T20:53:33.905469267Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:33.909682 containerd[1483]: time="2024-11-12T20:53:33.909560535Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.747835388s" Nov 12 20:53:33.909682 containerd[1483]: time="2024-11-12T20:53:33.909664077Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 12 20:53:33.925556 containerd[1483]: time="2024-11-12T20:53:33.925494596Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 12 20:53:34.003775 containerd[1483]: time="2024-11-12T20:53:33.995923149Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 12 20:53:34.230696 containerd[1483]: time="2024-11-12T20:53:34.228610445Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\"" Nov 12 20:53:34.241061 containerd[1483]: time="2024-11-12T20:53:34.236019593Z" level=info msg="StartContainer for \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\"" Nov 12 20:53:34.465656 systemd[1]: run-containerd-runc-k8s.io-239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3-runc.i55RRr.mount: Deactivated successfully. Nov 12 20:53:34.482921 systemd[1]: Started cri-containerd-239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3.scope - libcontainer container 239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3. Nov 12 20:53:34.564608 containerd[1483]: time="2024-11-12T20:53:34.564534854Z" level=info msg="StartContainer for \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\" returns successfully" Nov 12 20:53:34.596185 systemd[1]: cri-containerd-239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3.scope: Deactivated successfully. 
Nov 12 20:53:34.904169 containerd[1483]: time="2024-11-12T20:53:34.851204520Z" level=info msg="shim disconnected" id=239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3 namespace=k8s.io Nov 12 20:53:34.904169 containerd[1483]: time="2024-11-12T20:53:34.901401277Z" level=warning msg="cleaning up after shim disconnected" id=239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3 namespace=k8s.io Nov 12 20:53:34.904169 containerd[1483]: time="2024-11-12T20:53:34.901432688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:53:35.198287 kubelet[2605]: E1112 20:53:35.189919 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 12 20:53:35.209410 containerd[1483]: time="2024-11-12T20:53:35.207772668Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 12 20:53:35.229326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3-rootfs.mount: Deactivated successfully. Nov 12 20:53:35.331451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614812705.mount: Deactivated successfully. 
Nov 12 20:53:35.336070 kubelet[2605]: I1112 20:53:35.333200 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xm6hv" podStartSLOduration=17.333123274 podStartE2EDuration="17.333123274s" podCreationTimestamp="2024-11-12 20:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:20.052883713 +0000 UTC m=+14.574098287" watchObservedRunningTime="2024-11-12 20:53:35.333123274 +0000 UTC m=+29.854337860" Nov 12 20:53:35.337383 containerd[1483]: time="2024-11-12T20:53:35.337333885Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\"" Nov 12 20:53:35.341461 containerd[1483]: time="2024-11-12T20:53:35.340133102Z" level=info msg="StartContainer for \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\"" Nov 12 20:53:35.441119 systemd[1]: Started cri-containerd-b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228.scope - libcontainer container b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228. Nov 12 20:53:35.515175 containerd[1483]: time="2024-11-12T20:53:35.515113320Z" level=info msg="StartContainer for \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\" returns successfully" Nov 12 20:53:35.554725 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:53:35.555164 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:53:35.556196 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:53:35.568550 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 12 20:53:35.568999 systemd[1]: cri-containerd-b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228.scope: Deactivated successfully.
Nov 12 20:53:35.656534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:35.767246 containerd[1483]: time="2024-11-12T20:53:35.766571200Z" level=info msg="shim disconnected" id=b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228 namespace=k8s.io
Nov 12 20:53:35.768170 containerd[1483]: time="2024-11-12T20:53:35.767837294Z" level=warning msg="cleaning up after shim disconnected" id=b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228 namespace=k8s.io
Nov 12 20:53:35.768364 containerd[1483]: time="2024-11-12T20:53:35.768337668Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:35.889265 containerd[1483]: time="2024-11-12T20:53:35.889186599Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:53:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:53:36.245679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228-rootfs.mount: Deactivated successfully.
Nov 12 20:53:36.264787 kubelet[2605]: E1112 20:53:36.264159 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:36.287265 containerd[1483]: time="2024-11-12T20:53:36.287192624Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:53:36.424252 containerd[1483]: time="2024-11-12T20:53:36.423069789Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\""
Nov 12 20:53:36.427089 containerd[1483]: time="2024-11-12T20:53:36.426939100Z" level=info msg="StartContainer for \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\""
Nov 12 20:53:36.530612 systemd[1]: Started cri-containerd-015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c.scope - libcontainer container 015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c.
Nov 12 20:53:36.620408 containerd[1483]: time="2024-11-12T20:53:36.620214343Z" level=info msg="StartContainer for \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\" returns successfully"
Nov 12 20:53:36.632136 systemd[1]: cri-containerd-015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c.scope: Deactivated successfully.
Nov 12 20:53:36.748901 containerd[1483]: time="2024-11-12T20:53:36.748796109Z" level=info msg="shim disconnected" id=015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c namespace=k8s.io
Nov 12 20:53:36.749469 containerd[1483]: time="2024-11-12T20:53:36.749042024Z" level=warning msg="cleaning up after shim disconnected" id=015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c namespace=k8s.io
Nov 12 20:53:36.749469 containerd[1483]: time="2024-11-12T20:53:36.749063816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:37.234471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c-rootfs.mount: Deactivated successfully.
Nov 12 20:53:37.286377 kubelet[2605]: E1112 20:53:37.283475 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:37.314990 containerd[1483]: time="2024-11-12T20:53:37.314918651Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:53:37.427385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714765092.mount: Deactivated successfully.
Nov 12 20:53:37.435543 containerd[1483]: time="2024-11-12T20:53:37.435300835Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\""
Nov 12 20:53:37.439389 containerd[1483]: time="2024-11-12T20:53:37.436263011Z" level=info msg="StartContainer for \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\""
Nov 12 20:53:37.540121 systemd[1]: Started cri-containerd-ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2.scope - libcontainer container ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2.
Nov 12 20:53:37.625464 systemd[1]: cri-containerd-ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2.scope: Deactivated successfully.
Nov 12 20:53:37.642387 containerd[1483]: time="2024-11-12T20:53:37.642265544Z" level=info msg="StartContainer for \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\" returns successfully"
Nov 12 20:53:37.650047 containerd[1483]: time="2024-11-12T20:53:37.632007340Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice/cri-containerd-ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2.scope/memory.events\": no such file or directory"
Nov 12 20:53:37.706155 containerd[1483]: time="2024-11-12T20:53:37.705822330Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:37.711038 containerd[1483]: time="2024-11-12T20:53:37.710331201Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907181"
Nov 12 20:53:37.731125 containerd[1483]: time="2024-11-12T20:53:37.731039710Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:53:37.737553 containerd[1483]: time="2024-11-12T20:53:37.737417532Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.811856175s"
Nov 12 20:53:37.737553 containerd[1483]: time="2024-11-12T20:53:37.737532960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 12 20:53:37.744301 containerd[1483]: time="2024-11-12T20:53:37.743908630Z" level=info msg="CreateContainer within sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 12 20:53:37.746430 containerd[1483]: time="2024-11-12T20:53:37.746329244Z" level=info msg="shim disconnected" id=ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2 namespace=k8s.io
Nov 12 20:53:37.746430 containerd[1483]: time="2024-11-12T20:53:37.746419242Z" level=warning msg="cleaning up after shim disconnected" id=ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2 namespace=k8s.io
Nov 12 20:53:37.746430 containerd[1483]: time="2024-11-12T20:53:37.746433972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:53:37.780885 containerd[1483]: time="2024-11-12T20:53:37.780588634Z" level=info msg="CreateContainer within sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\""
Nov 12 20:53:37.782268 containerd[1483]: time="2024-11-12T20:53:37.782197142Z" level=info msg="StartContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\""
Nov 12 20:53:37.853761 systemd[1]: Started cri-containerd-eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef.scope - libcontainer container eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef.
Nov 12 20:53:37.950066 containerd[1483]: time="2024-11-12T20:53:37.949958078Z" level=info msg="StartContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" returns successfully"
Nov 12 20:53:38.242090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2-rootfs.mount: Deactivated successfully.
Nov 12 20:53:38.287975 kubelet[2605]: E1112 20:53:38.287731 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:38.305599 kubelet[2605]: E1112 20:53:38.305173 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:38.314647 containerd[1483]: time="2024-11-12T20:53:38.314016910Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:53:38.390104 containerd[1483]: time="2024-11-12T20:53:38.390025144Z" level=info msg="CreateContainer within sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\""
Nov 12 20:53:38.397763 containerd[1483]: time="2024-11-12T20:53:38.394244359Z" level=info msg="StartContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\""
Nov 12 20:53:38.503614 systemd[1]: Started cri-containerd-9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25.scope - libcontainer container 9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25.
Nov 12 20:53:38.637430 containerd[1483]: time="2024-11-12T20:53:38.637329973Z" level=info msg="StartContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" returns successfully"
Nov 12 20:53:38.982416 kubelet[2605]: I1112 20:53:38.982344 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-7h6rf" podStartSLOduration=2.6188123450000003 podStartE2EDuration="20.982273475s" podCreationTimestamp="2024-11-12 20:53:18 +0000 UTC" firstStartedPulling="2024-11-12 20:53:19.374496197 +0000 UTC m=+13.895710761" lastFinishedPulling="2024-11-12 20:53:37.737957325 +0000 UTC m=+32.259171891" observedRunningTime="2024-11-12 20:53:38.465016223 +0000 UTC m=+32.986230802" watchObservedRunningTime="2024-11-12 20:53:38.982273475 +0000 UTC m=+33.503488042"
Nov 12 20:53:39.107459 kubelet[2605]: I1112 20:53:39.107276 2605 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:53:39.332016 kubelet[2605]: E1112 20:53:39.330502 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:39.332016 kubelet[2605]: E1112 20:53:39.331729 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:39.356761 kubelet[2605]: I1112 20:53:39.356677 2605 topology_manager.go:215] "Topology Admit Handler" podUID="dac0423d-83e6-4f38-938b-bc2e7b59bc1b" podNamespace="kube-system" podName="coredns-76f75df574-9tjf4"
Nov 12 20:53:39.380056 systemd[1]: Created slice kubepods-burstable-poddac0423d_83e6_4f38_938b_bc2e7b59bc1b.slice - libcontainer container kubepods-burstable-poddac0423d_83e6_4f38_938b_bc2e7b59bc1b.slice.
Nov 12 20:53:39.407174 kubelet[2605]: I1112 20:53:39.407115 2605 topology_manager.go:215] "Topology Admit Handler" podUID="6eaf623d-5854-4444-a099-edbed5fa1433" podNamespace="kube-system" podName="coredns-76f75df574-hpst9"
Nov 12 20:53:39.424331 systemd[1]: Created slice kubepods-burstable-pod6eaf623d_5854_4444_a099_edbed5fa1433.slice - libcontainer container kubepods-burstable-pod6eaf623d_5854_4444_a099_edbed5fa1433.slice.
Nov 12 20:53:39.493977 kubelet[2605]: I1112 20:53:39.493611 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dac0423d-83e6-4f38-938b-bc2e7b59bc1b-config-volume\") pod \"coredns-76f75df574-9tjf4\" (UID: \"dac0423d-83e6-4f38-938b-bc2e7b59bc1b\") " pod="kube-system/coredns-76f75df574-9tjf4"
Nov 12 20:53:39.493977 kubelet[2605]: I1112 20:53:39.493773 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhjv\" (UniqueName: \"kubernetes.io/projected/dac0423d-83e6-4f38-938b-bc2e7b59bc1b-kube-api-access-trhjv\") pod \"coredns-76f75df574-9tjf4\" (UID: \"dac0423d-83e6-4f38-938b-bc2e7b59bc1b\") " pod="kube-system/coredns-76f75df574-9tjf4"
Nov 12 20:53:39.540115 kubelet[2605]: I1112 20:53:39.540058 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s7ccm" podStartSLOduration=6.783974702 podStartE2EDuration="21.539988178s" podCreationTimestamp="2024-11-12 20:53:18 +0000 UTC" firstStartedPulling="2024-11-12 20:53:19.156490955 +0000 UTC m=+13.677705512" lastFinishedPulling="2024-11-12 20:53:33.912504415 +0000 UTC m=+28.433718988" observedRunningTime="2024-11-12 20:53:39.537271457 +0000 UTC m=+34.058486032" watchObservedRunningTime="2024-11-12 20:53:39.539988178 +0000 UTC m=+34.061202751"
Nov 12 20:53:39.594770 kubelet[2605]: I1112 20:53:39.594440 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cj52\" (UniqueName: \"kubernetes.io/projected/6eaf623d-5854-4444-a099-edbed5fa1433-kube-api-access-4cj52\") pod \"coredns-76f75df574-hpst9\" (UID: \"6eaf623d-5854-4444-a099-edbed5fa1433\") " pod="kube-system/coredns-76f75df574-hpst9"
Nov 12 20:53:39.595858 kubelet[2605]: I1112 20:53:39.595822 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eaf623d-5854-4444-a099-edbed5fa1433-config-volume\") pod \"coredns-76f75df574-hpst9\" (UID: \"6eaf623d-5854-4444-a099-edbed5fa1433\") " pod="kube-system/coredns-76f75df574-hpst9"
Nov 12 20:53:40.003186 kubelet[2605]: E1112 20:53:40.002489 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:40.004505 containerd[1483]: time="2024-11-12T20:53:40.004442030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9tjf4,Uid:dac0423d-83e6-4f38-938b-bc2e7b59bc1b,Namespace:kube-system,Attempt:0,}"
Nov 12 20:53:40.039775 kubelet[2605]: E1112 20:53:40.036807 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:40.040006 containerd[1483]: time="2024-11-12T20:53:40.037580201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hpst9,Uid:6eaf623d-5854-4444-a099-edbed5fa1433,Namespace:kube-system,Attempt:0,}"
Nov 12 20:53:40.335585 kubelet[2605]: E1112 20:53:40.335363 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:41.357407 kubelet[2605]: E1112 20:53:41.357363 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:43.452167 systemd-networkd[1375]: cilium_host: Link UP
Nov 12 20:53:43.452412 systemd-networkd[1375]: cilium_net: Link UP
Nov 12 20:53:43.452712 systemd-networkd[1375]: cilium_net: Gained carrier
Nov 12 20:53:43.456707 systemd-networkd[1375]: cilium_host: Gained carrier
Nov 12 20:53:43.741555 systemd-networkd[1375]: cilium_vxlan: Link UP
Nov 12 20:53:43.741571 systemd-networkd[1375]: cilium_vxlan: Gained carrier
Nov 12 20:53:44.232905 systemd-networkd[1375]: cilium_host: Gained IPv6LL
Nov 12 20:53:44.421395 systemd-networkd[1375]: cilium_net: Gained IPv6LL
Nov 12 20:53:44.505245 kernel: NET: Registered PF_ALG protocol family
Nov 12 20:53:45.573994 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL
Nov 12 20:53:46.052459 systemd-networkd[1375]: lxc_health: Link UP
Nov 12 20:53:46.070058 systemd-networkd[1375]: lxc_health: Gained carrier
Nov 12 20:53:46.705123 systemd-networkd[1375]: lxc0cd913d9f9b4: Link UP
Nov 12 20:53:46.711707 kernel: eth0: renamed from tmpc9012
Nov 12 20:53:46.718979 systemd-networkd[1375]: lxc0cd913d9f9b4: Gained carrier
Nov 12 20:53:46.769773 systemd-networkd[1375]: lxc034ab2ee7ad0: Link UP
Nov 12 20:53:46.775825 kernel: eth0: renamed from tmp2a1ad
Nov 12 20:53:46.786290 systemd-networkd[1375]: lxc034ab2ee7ad0: Gained carrier
Nov 12 20:53:46.860003 kubelet[2605]: E1112 20:53:46.857200 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:47.390882 kubelet[2605]: E1112 20:53:47.390824 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:47.620878 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Nov 12 20:53:47.749367 systemd-networkd[1375]: lxc0cd913d9f9b4: Gained IPv6LL
Nov 12 20:53:48.391279 kubelet[2605]: E1112 20:53:48.391031 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:48.710133 systemd-networkd[1375]: lxc034ab2ee7ad0: Gained IPv6LL
Nov 12 20:53:53.716852 containerd[1483]: time="2024-11-12T20:53:53.716479960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:53:53.716852 containerd[1483]: time="2024-11-12T20:53:53.716607211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:53:53.720231 containerd[1483]: time="2024-11-12T20:53:53.716686518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:53:53.720231 containerd[1483]: time="2024-11-12T20:53:53.716904981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:53:53.775928 systemd[1]: Started cri-containerd-c9012f4e4dadf139bac3d424f4b11449112506e111c66aa673d5ecbd50e9c3a5.scope - libcontainer container c9012f4e4dadf139bac3d424f4b11449112506e111c66aa673d5ecbd50e9c3a5.
Nov 12 20:53:53.802831 containerd[1483]: time="2024-11-12T20:53:53.802148724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:53:53.802831 containerd[1483]: time="2024-11-12T20:53:53.802507047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:53:53.802831 containerd[1483]: time="2024-11-12T20:53:53.802608282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:53:53.803735 containerd[1483]: time="2024-11-12T20:53:53.803304638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:53:53.894565 systemd[1]: Started cri-containerd-2a1ad61487c8c8765a79975cd4126b9f3e4bb5b1850a0ae4b2e6aa6c88d67ec2.scope - libcontainer container 2a1ad61487c8c8765a79975cd4126b9f3e4bb5b1850a0ae4b2e6aa6c88d67ec2.
Nov 12 20:53:53.937352 containerd[1483]: time="2024-11-12T20:53:53.937056787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hpst9,Uid:6eaf623d-5854-4444-a099-edbed5fa1433,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9012f4e4dadf139bac3d424f4b11449112506e111c66aa673d5ecbd50e9c3a5\""
Nov 12 20:53:53.943978 kubelet[2605]: E1112 20:53:53.943483 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:53.953123 containerd[1483]: time="2024-11-12T20:53:53.952520159Z" level=info msg="CreateContainer within sandbox \"c9012f4e4dadf139bac3d424f4b11449112506e111c66aa673d5ecbd50e9c3a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 20:53:53.987761 containerd[1483]: time="2024-11-12T20:53:53.987288822Z" level=info msg="CreateContainer within sandbox \"c9012f4e4dadf139bac3d424f4b11449112506e111c66aa673d5ecbd50e9c3a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c37715c3d98d8f9aedfa3107201c62ab67c5d9ef8a08a679612a136c343ed750\""
Nov 12 20:53:53.991666 containerd[1483]: time="2024-11-12T20:53:53.990697390Z" level=info msg="StartContainer for \"c37715c3d98d8f9aedfa3107201c62ab67c5d9ef8a08a679612a136c343ed750\""
Nov 12 20:53:54.060815 containerd[1483]: time="2024-11-12T20:53:54.060513281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9tjf4,Uid:dac0423d-83e6-4f38-938b-bc2e7b59bc1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a1ad61487c8c8765a79975cd4126b9f3e4bb5b1850a0ae4b2e6aa6c88d67ec2\""
Nov 12 20:53:54.064300 kubelet[2605]: E1112 20:53:54.064251 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:54.072841 containerd[1483]: time="2024-11-12T20:53:54.072700901Z" level=info msg="CreateContainer within sandbox \"2a1ad61487c8c8765a79975cd4126b9f3e4bb5b1850a0ae4b2e6aa6c88d67ec2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 20:53:54.087188 systemd[1]: Started cri-containerd-c37715c3d98d8f9aedfa3107201c62ab67c5d9ef8a08a679612a136c343ed750.scope - libcontainer container c37715c3d98d8f9aedfa3107201c62ab67c5d9ef8a08a679612a136c343ed750.
Nov 12 20:53:54.110739 containerd[1483]: time="2024-11-12T20:53:54.110556792Z" level=info msg="CreateContainer within sandbox \"2a1ad61487c8c8765a79975cd4126b9f3e4bb5b1850a0ae4b2e6aa6c88d67ec2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ed8c0f96f24a208d1f7db8f028c92f5b24085614bac42729ec315b691445d3c\""
Nov 12 20:53:54.112775 containerd[1483]: time="2024-11-12T20:53:54.111609814Z" level=info msg="StartContainer for \"7ed8c0f96f24a208d1f7db8f028c92f5b24085614bac42729ec315b691445d3c\""
Nov 12 20:53:54.169125 systemd[1]: Started cri-containerd-7ed8c0f96f24a208d1f7db8f028c92f5b24085614bac42729ec315b691445d3c.scope - libcontainer container 7ed8c0f96f24a208d1f7db8f028c92f5b24085614bac42729ec315b691445d3c.
Nov 12 20:53:54.205180 containerd[1483]: time="2024-11-12T20:53:54.205095956Z" level=info msg="StartContainer for \"c37715c3d98d8f9aedfa3107201c62ab67c5d9ef8a08a679612a136c343ed750\" returns successfully"
Nov 12 20:53:54.246903 containerd[1483]: time="2024-11-12T20:53:54.246555180Z" level=info msg="StartContainer for \"7ed8c0f96f24a208d1f7db8f028c92f5b24085614bac42729ec315b691445d3c\" returns successfully"
Nov 12 20:53:54.416286 kubelet[2605]: E1112 20:53:54.416093 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:54.422513 kubelet[2605]: E1112 20:53:54.422472 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:54.561394 kubelet[2605]: I1112 20:53:54.559265 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9tjf4" podStartSLOduration=36.559204131 podStartE2EDuration="36.559204131s" podCreationTimestamp="2024-11-12 20:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:54.555989345 +0000 UTC m=+49.077203914" watchObservedRunningTime="2024-11-12 20:53:54.559204131 +0000 UTC m=+49.080418702"
Nov 12 20:53:54.561394 kubelet[2605]: I1112 20:53:54.559387 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hpst9" podStartSLOduration=36.559358998 podStartE2EDuration="36.559358998s" podCreationTimestamp="2024-11-12 20:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:53:54.485674876 +0000 UTC m=+49.006889438" watchObservedRunningTime="2024-11-12 20:53:54.559358998 +0000 UTC m=+49.080573579"
Nov 12 20:53:55.426863 kubelet[2605]: E1112 20:53:55.425761 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:55.427931 kubelet[2605]: E1112 20:53:55.427801 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:55.950318 systemd[1]: Started sshd@7-137.184.81.153:22-139.178.68.195:53956.service - OpenSSH per-connection server daemon (139.178.68.195:53956).
Nov 12 20:53:56.098961 sshd[3976]: Accepted publickey for core from 139.178.68.195 port 53956 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:53:56.103869 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:53:56.120368 systemd-logind[1453]: New session 8 of user core.
Nov 12 20:53:56.128095 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 20:53:56.429646 kubelet[2605]: E1112 20:53:56.429590 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:56.430880 kubelet[2605]: E1112 20:53:56.429556 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:53:56.824170 sshd[3976]: pam_unix(sshd:session): session closed for user core
Nov 12 20:53:56.831820 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Nov 12 20:53:56.835505 systemd[1]: sshd@7-137.184.81.153:22-139.178.68.195:53956.service: Deactivated successfully.
Nov 12 20:53:56.841155 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 20:53:56.843389 systemd-logind[1453]: Removed session 8.
Nov 12 20:54:01.889872 systemd[1]: Started sshd@8-137.184.81.153:22-139.178.68.195:53958.service - OpenSSH per-connection server daemon (139.178.68.195:53958).
Nov 12 20:54:02.030971 sshd[3991]: Accepted publickey for core from 139.178.68.195 port 53958 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:02.034512 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:02.058459 systemd-logind[1453]: New session 9 of user core.
Nov 12 20:54:02.066088 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 12 20:54:02.422929 sshd[3991]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:02.429039 systemd[1]: sshd@8-137.184.81.153:22-139.178.68.195:53958.service: Deactivated successfully.
Nov 12 20:54:02.436163 systemd[1]: session-9.scope: Deactivated successfully.
Nov 12 20:54:02.441157 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Nov 12 20:54:02.443817 systemd-logind[1453]: Removed session 9.
Nov 12 20:54:07.447229 systemd[1]: Started sshd@9-137.184.81.153:22-139.178.68.195:36872.service - OpenSSH per-connection server daemon (139.178.68.195:36872).
Nov 12 20:54:07.523650 sshd[4008]: Accepted publickey for core from 139.178.68.195 port 36872 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:07.529442 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:07.544777 systemd-logind[1453]: New session 10 of user core.
Nov 12 20:54:07.550063 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 20:54:07.749320 sshd[4008]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:07.754734 systemd[1]: sshd@9-137.184.81.153:22-139.178.68.195:36872.service: Deactivated successfully.
Nov 12 20:54:07.761547 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 20:54:07.766115 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Nov 12 20:54:07.771043 systemd-logind[1453]: Removed session 10.
Nov 12 20:54:11.876684 kubelet[2605]: E1112 20:54:11.875818 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:54:12.777378 systemd[1]: Started sshd@10-137.184.81.153:22-139.178.68.195:36886.service - OpenSSH per-connection server daemon (139.178.68.195:36886).
Nov 12 20:54:12.882663 sshd[4023]: Accepted publickey for core from 139.178.68.195 port 36886 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:12.890448 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:12.915535 systemd-logind[1453]: New session 11 of user core.
Nov 12 20:54:12.920779 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 20:54:13.286661 sshd[4023]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:13.307108 systemd[1]: sshd@10-137.184.81.153:22-139.178.68.195:36886.service: Deactivated successfully.
Nov 12 20:54:13.316975 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 20:54:13.323823 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Nov 12 20:54:13.337514 systemd[1]: Started sshd@11-137.184.81.153:22-139.178.68.195:36898.service - OpenSSH per-connection server daemon (139.178.68.195:36898).
Nov 12 20:54:13.340979 systemd-logind[1453]: Removed session 11.
Nov 12 20:54:13.480385 sshd[4037]: Accepted publickey for core from 139.178.68.195 port 36898 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:13.484300 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:13.498748 systemd-logind[1453]: New session 12 of user core.
Nov 12 20:54:13.514955 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:54:13.966067 sshd[4037]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:13.987331 systemd[1]: sshd@11-137.184.81.153:22-139.178.68.195:36898.service: Deactivated successfully.
Nov 12 20:54:13.997375 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:54:14.008973 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:54:14.027458 systemd[1]: Started sshd@12-137.184.81.153:22-139.178.68.195:36912.service - OpenSSH per-connection server daemon (139.178.68.195:36912).
Nov 12 20:54:14.037536 systemd-logind[1453]: Removed session 12.
Nov 12 20:54:14.135053 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 36912 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:14.134825 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:14.148966 systemd-logind[1453]: New session 13 of user core.
Nov 12 20:54:14.154056 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:54:14.372403 sshd[4048]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:14.379194 systemd[1]: sshd@12-137.184.81.153:22-139.178.68.195:36912.service: Deactivated successfully.
Nov 12 20:54:14.383086 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:54:14.386261 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:54:14.388360 systemd-logind[1453]: Removed session 13.
Nov 12 20:54:19.405961 systemd[1]: Started sshd@13-137.184.81.153:22-139.178.68.195:60490.service - OpenSSH per-connection server daemon (139.178.68.195:60490).
Nov 12 20:54:19.483854 sshd[4062]: Accepted publickey for core from 139.178.68.195 port 60490 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:19.487611 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:19.496759 systemd-logind[1453]: New session 14 of user core.
Nov 12 20:54:19.504595 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:54:19.718046 sshd[4062]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:19.723148 systemd[1]: sshd@13-137.184.81.153:22-139.178.68.195:60490.service: Deactivated successfully.
Nov 12 20:54:19.727211 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:54:19.731469 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:54:19.735918 systemd-logind[1453]: Removed session 14.
Nov 12 20:54:24.760128 systemd[1]: Started sshd@14-137.184.81.153:22-139.178.68.195:60492.service - OpenSSH per-connection server daemon (139.178.68.195:60492).
Nov 12 20:54:24.831399 sshd[4077]: Accepted publickey for core from 139.178.68.195 port 60492 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:24.834190 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:24.844987 systemd-logind[1453]: New session 15 of user core.
Nov 12 20:54:24.848935 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:54:25.055453 sshd[4077]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:25.062152 systemd[1]: sshd@14-137.184.81.153:22-139.178.68.195:60492.service: Deactivated successfully.
Nov 12 20:54:25.068995 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:54:25.071583 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:54:25.075253 systemd-logind[1453]: Removed session 15.
Nov 12 20:54:27.140947 update_engine[1455]: I20241112 20:54:27.140437 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 12 20:54:27.140947 update_engine[1455]: I20241112 20:54:27.140561 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.148594 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.149845 1455 omaha_request_params.cc:62] Current group set to stable
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.150009 1455 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.150017 1455 update_attempter.cc:643] Scheduling an action processor start.
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.150039 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.150878 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.151034 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.151046 1455 omaha_request_action.cc:272] Request:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]:
Nov 12 20:54:27.152586 update_engine[1455]: I20241112 20:54:27.151055 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 12 20:54:27.159615 update_engine[1455]: I20241112 20:54:27.156569 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 12 20:54:27.159615 update_engine[1455]: I20241112 20:54:27.157934 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 12 20:54:27.161898 update_engine[1455]: E20241112 20:54:27.161181 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 12 20:54:27.161898 update_engine[1455]: I20241112 20:54:27.161829 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 12 20:54:27.185917 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 12 20:54:30.088988 systemd[1]: Started sshd@15-137.184.81.153:22-139.178.68.195:44572.service - OpenSSH per-connection server daemon (139.178.68.195:44572).
Nov 12 20:54:30.247377 sshd[4090]: Accepted publickey for core from 139.178.68.195 port 44572 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:30.252411 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:30.268550 systemd-logind[1453]: New session 16 of user core.
Nov 12 20:54:30.280953 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:54:30.701025 sshd[4090]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:30.762719 systemd[1]: Started sshd@16-137.184.81.153:22-139.178.68.195:44576.service - OpenSSH per-connection server daemon (139.178.68.195:44576).
Nov 12 20:54:30.764346 systemd[1]: sshd@15-137.184.81.153:22-139.178.68.195:44572.service: Deactivated successfully.
Nov 12 20:54:30.772598 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:54:30.793750 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:54:30.802818 systemd-logind[1453]: Removed session 16.
Nov 12 20:54:30.868197 sshd[4101]: Accepted publickey for core from 139.178.68.195 port 44576 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:30.875983 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:30.895378 systemd-logind[1453]: New session 17 of user core.
Nov 12 20:54:30.913075 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:54:31.102606 kernel: hrtimer: interrupt took 1042846 ns
Nov 12 20:54:31.532830 sshd[4101]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:31.553474 systemd[1]: sshd@16-137.184.81.153:22-139.178.68.195:44576.service: Deactivated successfully.
Nov 12 20:54:31.560208 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:54:31.563989 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:54:31.593307 systemd[1]: Started sshd@17-137.184.81.153:22-139.178.68.195:44578.service - OpenSSH per-connection server daemon (139.178.68.195:44578).
Nov 12 20:54:31.596838 systemd-logind[1453]: Removed session 17.
Nov 12 20:54:31.790836 sshd[4114]: Accepted publickey for core from 139.178.68.195 port 44578 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:31.795348 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:31.805826 systemd-logind[1453]: New session 18 of user core.
Nov 12 20:54:31.808952 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:54:32.872598 kubelet[2605]: E1112 20:54:32.872502 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:54:35.396934 sshd[4114]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:35.441437 systemd[1]: Started sshd@18-137.184.81.153:22-139.178.68.195:44592.service - OpenSSH per-connection server daemon (139.178.68.195:44592).
Nov 12 20:54:35.454684 systemd[1]: sshd@17-137.184.81.153:22-139.178.68.195:44578.service: Deactivated successfully.
Nov 12 20:54:35.459856 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:54:35.474227 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:54:35.481284 systemd-logind[1453]: Removed session 18.
Nov 12 20:54:35.577781 sshd[4129]: Accepted publickey for core from 139.178.68.195 port 44592 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:35.578967 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:35.588325 systemd-logind[1453]: New session 19 of user core.
Nov 12 20:54:35.594971 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:54:36.374008 sshd[4129]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:36.389608 systemd[1]: sshd@18-137.184.81.153:22-139.178.68.195:44592.service: Deactivated successfully.
Nov 12 20:54:36.398022 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:54:36.403484 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:54:36.415266 systemd[1]: Started sshd@19-137.184.81.153:22-139.178.68.195:57506.service - OpenSSH per-connection server daemon (139.178.68.195:57506).
Nov 12 20:54:36.424758 systemd-logind[1453]: Removed session 19.
Nov 12 20:54:36.492845 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 57506 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:36.495559 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:36.505243 systemd-logind[1453]: New session 20 of user core.
Nov 12 20:54:36.512009 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:54:36.729798 sshd[4143]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:36.743734 systemd[1]: sshd@19-137.184.81.153:22-139.178.68.195:57506.service: Deactivated successfully.
Nov 12 20:54:36.750905 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:54:36.752820 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:54:36.755302 systemd-logind[1453]: Removed session 20.
Nov 12 20:54:37.004514 update_engine[1455]: I20241112 20:54:37.004233 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 12 20:54:37.005138 update_engine[1455]: I20241112 20:54:37.004753 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 12 20:54:37.005262 update_engine[1455]: I20241112 20:54:37.005220 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 12 20:54:37.005875 update_engine[1455]: E20241112 20:54:37.005811 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 12 20:54:37.006204 update_engine[1455]: I20241112 20:54:37.005901 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 12 20:54:40.873304 kubelet[2605]: E1112 20:54:40.873246 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:54:41.768264 systemd[1]: Started sshd@20-137.184.81.153:22-139.178.68.195:57514.service - OpenSSH per-connection server daemon (139.178.68.195:57514).
Nov 12 20:54:41.841443 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 57514 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:41.845421 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:41.858611 systemd-logind[1453]: New session 21 of user core.
Nov 12 20:54:41.863826 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:54:42.116115 sshd[4158]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:42.126338 systemd[1]: sshd@20-137.184.81.153:22-139.178.68.195:57514.service: Deactivated successfully.
Nov 12 20:54:42.133229 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:54:42.135528 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:54:42.138428 systemd-logind[1453]: Removed session 21.
Nov 12 20:54:44.873981 kubelet[2605]: E1112 20:54:44.871848 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:54:47.008256 update_engine[1455]: I20241112 20:54:47.008070 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 12 20:54:47.008933 update_engine[1455]: I20241112 20:54:47.008538 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 12 20:54:47.010183 update_engine[1455]: I20241112 20:54:47.009935 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 12 20:54:47.010903 update_engine[1455]: E20241112 20:54:47.010751 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 12 20:54:47.011020 update_engine[1455]: I20241112 20:54:47.010946 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Nov 12 20:54:47.144540 systemd[1]: Started sshd@21-137.184.81.153:22-139.178.68.195:42478.service - OpenSSH per-connection server daemon (139.178.68.195:42478).
Nov 12 20:54:47.218370 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 42478 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:47.221223 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:47.241533 systemd-logind[1453]: New session 22 of user core.
Nov 12 20:54:47.249526 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:54:47.504948 sshd[4171]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:47.516595 systemd[1]: sshd@21-137.184.81.153:22-139.178.68.195:42478.service: Deactivated successfully.
Nov 12 20:54:47.519513 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:54:47.523611 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:54:47.526983 systemd-logind[1453]: Removed session 22.
Nov 12 20:54:47.878405 kubelet[2605]: E1112 20:54:47.877313 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:54:52.554595 systemd[1]: Started sshd@22-137.184.81.153:22-139.178.68.195:42494.service - OpenSSH per-connection server daemon (139.178.68.195:42494).
Nov 12 20:54:52.634407 sshd[4185]: Accepted publickey for core from 139.178.68.195 port 42494 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:52.638936 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:52.650947 systemd-logind[1453]: New session 23 of user core.
Nov 12 20:54:52.667154 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:54:52.994094 sshd[4185]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:53.017885 systemd[1]: sshd@22-137.184.81.153:22-139.178.68.195:42494.service: Deactivated successfully.
Nov 12 20:54:53.023156 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:54:53.031057 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:54:53.036140 systemd-logind[1453]: Removed session 23.
Nov 12 20:54:57.006760 update_engine[1455]: I20241112 20:54:57.006569 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 12 20:54:57.007326 update_engine[1455]: I20241112 20:54:57.007036 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 12 20:54:57.007426 update_engine[1455]: I20241112 20:54:57.007383 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 12 20:54:57.008963 update_engine[1455]: E20241112 20:54:57.008847 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 12 20:54:57.009185 update_engine[1455]: I20241112 20:54:57.009052 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 12 20:54:57.009185 update_engine[1455]: I20241112 20:54:57.009073 1455 omaha_request_action.cc:617] Omaha request response:
Nov 12 20:54:57.009262 update_engine[1455]: E20241112 20:54:57.009220 1455 omaha_request_action.cc:636] Omaha request network transfer failed.
Nov 12 20:54:57.014216 update_engine[1455]: I20241112 20:54:57.013762 1455 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Nov 12 20:54:57.014216 update_engine[1455]: I20241112 20:54:57.013831 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 12 20:54:57.014216 update_engine[1455]: I20241112 20:54:57.013840 1455 update_attempter.cc:306] Processing Done.
Nov 12 20:54:57.014216 update_engine[1455]: E20241112 20:54:57.013867 1455 update_attempter.cc:619] Update failed.
Nov 12 20:54:57.022840 update_engine[1455]: I20241112 20:54:57.021801 1455 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023043 1455 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023108 1455 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023233 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023278 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023290 1455 omaha_request_action.cc:272] Request:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]:
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023303 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 12 20:54:57.023882 update_engine[1455]: I20241112 20:54:57.023602 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 12 20:54:57.025844 update_engine[1455]: I20241112 20:54:57.025525 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 12 20:54:57.026677 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 12 20:54:57.027666 update_engine[1455]: E20241112 20:54:57.027170 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027272 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027288 1455 omaha_request_action.cc:617] Omaha request response:
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027300 1455 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027310 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027319 1455 update_attempter.cc:306] Processing Done.
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027334 1455 update_attempter.cc:310] Error event sent.
Nov 12 20:54:57.027666 update_engine[1455]: I20241112 20:54:57.027351 1455 update_check_scheduler.cc:74] Next update check in 46m21s
Nov 12 20:54:57.029902 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Nov 12 20:54:58.025927 systemd[1]: Started sshd@23-137.184.81.153:22-139.178.68.195:56326.service - OpenSSH per-connection server daemon (139.178.68.195:56326).
Nov 12 20:54:58.131322 sshd[4198]: Accepted publickey for core from 139.178.68.195 port 56326 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:58.137990 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:58.152167 systemd-logind[1453]: New session 24 of user core.
Nov 12 20:54:58.162152 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:54:58.376093 sshd[4198]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:58.384321 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:54:58.405074 systemd[1]: sshd@23-137.184.81.153:22-139.178.68.195:56326.service: Deactivated successfully.
Nov 12 20:54:58.412112 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:54:58.423393 systemd[1]: Started sshd@24-137.184.81.153:22-139.178.68.195:56330.service - OpenSSH per-connection server daemon (139.178.68.195:56330).
Nov 12 20:54:58.425520 systemd-logind[1453]: Removed session 24.
Nov 12 20:54:58.509699 sshd[4211]: Accepted publickey for core from 139.178.68.195 port 56330 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:54:58.512276 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:58.522098 systemd-logind[1453]: New session 25 of user core.
Nov 12 20:54:58.525972 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:55:00.502827 systemd[1]: run-containerd-runc-k8s.io-9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25-runc.iuegMw.mount: Deactivated successfully.
Nov 12 20:55:00.564463 containerd[1483]: time="2024-11-12T20:55:00.564346456Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:55:00.734979 containerd[1483]: time="2024-11-12T20:55:00.734148039Z" level=info msg="StopContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" with timeout 30 (s)"
Nov 12 20:55:00.734979 containerd[1483]: time="2024-11-12T20:55:00.734850173Z" level=info msg="StopContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" with timeout 2 (s)"
Nov 12 20:55:00.739161 containerd[1483]: time="2024-11-12T20:55:00.739079686Z" level=info msg="Stop container \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" with signal terminated"
Nov 12 20:55:00.739364 containerd[1483]: time="2024-11-12T20:55:00.739114676Z" level=info msg="Stop container \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" with signal terminated"
Nov 12 20:55:00.766987 systemd-networkd[1375]: lxc_health: Link DOWN
Nov 12 20:55:00.766998 systemd-networkd[1375]: lxc_health: Lost carrier
Nov 12 20:55:00.789423 systemd[1]: cri-containerd-eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef.scope: Deactivated successfully.
Nov 12 20:55:00.847128 systemd[1]: cri-containerd-9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25.scope: Deactivated successfully.
Nov 12 20:55:00.848263 systemd[1]: cri-containerd-9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25.scope: Consumed 11.634s CPU time.
Nov 12 20:55:00.890482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef-rootfs.mount: Deactivated successfully.
Nov 12 20:55:00.919855 containerd[1483]: time="2024-11-12T20:55:00.919471878Z" level=info msg="shim disconnected" id=eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef namespace=k8s.io
Nov 12 20:55:00.919855 containerd[1483]: time="2024-11-12T20:55:00.919576885Z" level=warning msg="cleaning up after shim disconnected" id=eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef namespace=k8s.io
Nov 12 20:55:00.919855 containerd[1483]: time="2024-11-12T20:55:00.919592938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:00.945923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25-rootfs.mount: Deactivated successfully.
Nov 12 20:55:00.948500 containerd[1483]: time="2024-11-12T20:55:00.948068070Z" level=info msg="shim disconnected" id=9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25 namespace=k8s.io
Nov 12 20:55:00.948500 containerd[1483]: time="2024-11-12T20:55:00.948158142Z" level=warning msg="cleaning up after shim disconnected" id=9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25 namespace=k8s.io
Nov 12 20:55:00.948500 containerd[1483]: time="2024-11-12T20:55:00.948171725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:00.993729 containerd[1483]: time="2024-11-12T20:55:00.992511699Z" level=info msg="StopContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" returns successfully"
Nov 12 20:55:01.012822 containerd[1483]: time="2024-11-12T20:55:01.003015231Z" level=info msg="StopPodSandbox for \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\""
Nov 12 20:55:01.012822 containerd[1483]: time="2024-11-12T20:55:01.004395540Z" level=info msg="Container to stop \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.009060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0-shm.mount: Deactivated successfully.
Nov 12 20:55:01.040315 systemd[1]: cri-containerd-33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0.scope: Deactivated successfully.
Nov 12 20:55:01.049916 containerd[1483]: time="2024-11-12T20:55:01.048666069Z" level=info msg="StopContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" returns successfully"
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.050253131Z" level=info msg="StopPodSandbox for \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\""
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.051799301Z" level=info msg="Container to stop \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.051829275Z" level=info msg="Container to stop \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.051845272Z" level=info msg="Container to stop \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.051863575Z" level=info msg="Container to stop \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.051923 containerd[1483]: time="2024-11-12T20:55:01.051884544Z" level=info msg="Container to stop \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:55:01.078647 systemd[1]: cri-containerd-da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd.scope: Deactivated successfully.
Nov 12 20:55:01.156692 containerd[1483]: time="2024-11-12T20:55:01.156555666Z" level=info msg="shim disconnected" id=33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0 namespace=k8s.io
Nov 12 20:55:01.157591 containerd[1483]: time="2024-11-12T20:55:01.157196780Z" level=warning msg="cleaning up after shim disconnected" id=33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0 namespace=k8s.io
Nov 12 20:55:01.157591 containerd[1483]: time="2024-11-12T20:55:01.157289640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:01.161722 kubelet[2605]: E1112 20:55:01.161552 2605 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 20:55:01.203343 containerd[1483]: time="2024-11-12T20:55:01.201321758Z" level=info msg="shim disconnected" id=da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd namespace=k8s.io
Nov 12 20:55:01.203343 containerd[1483]: time="2024-11-12T20:55:01.202512237Z" level=warning msg="cleaning up after shim disconnected" id=da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd namespace=k8s.io
Nov 12 20:55:01.203343 containerd[1483]: time="2024-11-12T20:55:01.202805706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:01.209323 containerd[1483]: time="2024-11-12T20:55:01.208438006Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:55:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:55:01.290754 containerd[1483]: time="2024-11-12T20:55:01.290658482Z" level=info msg="TearDown network for sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" successfully"
Nov 12 20:55:01.290754 containerd[1483]: time="2024-11-12T20:55:01.290747372Z" level=info msg="StopPodSandbox for \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" returns successfully"
Nov 12 20:55:01.296605 containerd[1483]: time="2024-11-12T20:55:01.296227634Z" level=info msg="TearDown network for sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" successfully"
Nov 12 20:55:01.297335 containerd[1483]: time="2024-11-12T20:55:01.297289400Z" level=info msg="StopPodSandbox for \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" returns successfully"
Nov 12 20:55:01.379370 kubelet[2605]: I1112 20:55:01.374674 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388151 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-bpf-maps\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388259 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-hostproc\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388313 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsht2\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-kube-api-access-dsht2\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388356 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95897e02-b56c-4f89-a70a-22670429f747-clustermesh-secrets\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388391 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-net\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.389672 kubelet[2605]: I1112 20:55:01.388430 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-run\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388467 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-kernel\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388501 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-xtables-lock\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388535 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-cgroup\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388571 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-hubble-tls\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388606 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cni-path\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390201 kubelet[2605]: I1112 20:55:01.388655 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-lib-modules\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") "
Nov 12 20:55:01.390458 kubelet[2605]: I1112 20:55:01.388690 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1abb8aca-ee80-49d4-bce6-f522ea5756c3-cilium-config-path\") pod \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\" (UID: \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\") "
Nov 12 20:55:01.390458 kubelet[2605]: I1112 20:55:01.388733 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlhxw\" (UniqueName: \"kubernetes.io/projected/1abb8aca-ee80-49d4-bce6-f522ea5756c3-kube-api-access-hlhxw\") pod \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\" (UID: \"1abb8aca-ee80-49d4-bce6-f522ea5756c3\") "
Nov 12 20:55:01.390458
kubelet[2605]: I1112 20:55:01.388768 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95897e02-b56c-4f89-a70a-22670429f747-cilium-config-path\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " Nov 12 20:55:01.390458 kubelet[2605]: I1112 20:55:01.388801 2605 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-etc-cni-netd\") pod \"95897e02-b56c-4f89-a70a-22670429f747\" (UID: \"95897e02-b56c-4f89-a70a-22670429f747\") " Nov 12 20:55:01.390458 kubelet[2605]: I1112 20:55:01.388902 2605 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-bpf-maps\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.390458 kubelet[2605]: I1112 20:55:01.388970 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.392607 kubelet[2605]: I1112 20:55:01.389019 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-hostproc" (OuterVolumeSpecName: "hostproc") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406258 kubelet[2605]: I1112 20:55:01.403789 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406258 kubelet[2605]: I1112 20:55:01.403893 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406258 kubelet[2605]: I1112 20:55:01.404035 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406258 kubelet[2605]: I1112 20:55:01.404060 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406258 kubelet[2605]: I1112 20:55:01.404202 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406601 kubelet[2605]: I1112 20:55:01.404866 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cni-path" (OuterVolumeSpecName: "cni-path") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.406601 kubelet[2605]: I1112 20:55:01.405016 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 12 20:55:01.441143 kubelet[2605]: I1112 20:55:01.432301 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-kube-api-access-dsht2" (OuterVolumeSpecName: "kube-api-access-dsht2") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "kube-api-access-dsht2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:55:01.441143 kubelet[2605]: I1112 20:55:01.432647 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95897e02-b56c-4f89-a70a-22670429f747-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:55:01.441143 kubelet[2605]: I1112 20:55:01.434796 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abb8aca-ee80-49d4-bce6-f522ea5756c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1abb8aca-ee80-49d4-bce6-f522ea5756c3" (UID: "1abb8aca-ee80-49d4-bce6-f522ea5756c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 12 20:55:01.447605 kubelet[2605]: I1112 20:55:01.445684 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abb8aca-ee80-49d4-bce6-f522ea5756c3-kube-api-access-hlhxw" (OuterVolumeSpecName: "kube-api-access-hlhxw") pod "1abb8aca-ee80-49d4-bce6-f522ea5756c3" (UID: "1abb8aca-ee80-49d4-bce6-f522ea5756c3"). InnerVolumeSpecName "kube-api-access-hlhxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:55:01.447605 kubelet[2605]: I1112 20:55:01.445967 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 12 20:55:01.447605 kubelet[2605]: I1112 20:55:01.446020 2605 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95897e02-b56c-4f89-a70a-22670429f747-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "95897e02-b56c-4f89-a70a-22670429f747" (UID: "95897e02-b56c-4f89-a70a-22670429f747"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 12 20:55:01.487331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0-rootfs.mount: Deactivated successfully. Nov 12 20:55:01.487530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd-rootfs.mount: Deactivated successfully. Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490104 2605 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95897e02-b56c-4f89-a70a-22670429f747-cilium-config-path\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490148 2605 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-etc-cni-netd\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490171 2605 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-hostproc\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490436 2605 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dsht2\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-kube-api-access-dsht2\") on node 
\"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490456 2605 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/95897e02-b56c-4f89-a70a-22670429f747-clustermesh-secrets\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490474 2605 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-net\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490493 2605 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-run\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496481 kubelet[2605]: I1112 20:55:01.490511 2605 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-host-proc-sys-kernel\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.487707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd-shm.mount: Deactivated successfully. 
Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490530 2605 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-xtables-lock\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490554 2605 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cilium-cgroup\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490573 2605 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/95897e02-b56c-4f89-a70a-22670429f747-hubble-tls\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490588 2605 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-cni-path\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490606 2605 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95897e02-b56c-4f89-a70a-22670429f747-lib-modules\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490640 2605 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1abb8aca-ee80-49d4-bce6-f522ea5756c3-cilium-config-path\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 20:55:01.496980 kubelet[2605]: I1112 20:55:01.490655 2605 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hlhxw\" (UniqueName: \"kubernetes.io/projected/1abb8aca-ee80-49d4-bce6-f522ea5756c3-kube-api-access-hlhxw\") on node \"ci-4081.2.0-a-ee124ee133\" DevicePath \"\"" Nov 12 
20:55:01.487844 systemd[1]: var-lib-kubelet-pods-1abb8aca\x2dee80\x2d49d4\x2dbce6\x2df522ea5756c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlhxw.mount: Deactivated successfully. Nov 12 20:55:01.487933 systemd[1]: var-lib-kubelet-pods-95897e02\x2db56c\x2d4f89\x2da70a\x2d22670429f747-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsht2.mount: Deactivated successfully. Nov 12 20:55:01.488169 systemd[1]: var-lib-kubelet-pods-95897e02\x2db56c\x2d4f89\x2da70a\x2d22670429f747-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 12 20:55:01.488263 systemd[1]: var-lib-kubelet-pods-95897e02\x2db56c\x2d4f89\x2da70a\x2d22670429f747-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 12 20:55:01.774708 kubelet[2605]: I1112 20:55:01.774668 2605 scope.go:117] "RemoveContainer" containerID="eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef" Nov 12 20:55:01.815663 systemd[1]: Removed slice kubepods-besteffort-pod1abb8aca_ee80_49d4_bce6_f522ea5756c3.slice - libcontainer container kubepods-besteffort-pod1abb8aca_ee80_49d4_bce6_f522ea5756c3.slice. Nov 12 20:55:01.862040 containerd[1483]: time="2024-11-12T20:55:01.861975602Z" level=info msg="RemoveContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\"" Nov 12 20:55:01.866646 systemd[1]: Removed slice kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice - libcontainer container kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice. Nov 12 20:55:01.867052 systemd[1]: kubepods-burstable-pod95897e02_b56c_4f89_a70a_22670429f747.slice: Consumed 11.770s CPU time. 
Nov 12 20:55:01.913231 containerd[1483]: time="2024-11-12T20:55:01.905472747Z" level=info msg="RemoveContainer for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" returns successfully" Nov 12 20:55:01.941230 kubelet[2605]: I1112 20:55:01.940723 2605 scope.go:117] "RemoveContainer" containerID="eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef" Nov 12 20:55:01.964377 containerd[1483]: time="2024-11-12T20:55:01.946450133Z" level=error msg="ContainerStatus for \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\": not found" Nov 12 20:55:02.017213 kubelet[2605]: E1112 20:55:02.001813 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\": not found" containerID="eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef" Nov 12 20:55:02.054800 kubelet[2605]: I1112 20:55:02.053847 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef"} err="failed to get container status \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\": rpc error: code = NotFound desc = an error occurred when try to find container \"eabb459b9bbe640917742bbaf0c3fbdb31d1a723bcb7f5d6c568204becc3abef\": not found" Nov 12 20:55:02.054800 kubelet[2605]: I1112 20:55:02.053928 2605 scope.go:117] "RemoveContainer" containerID="9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25" Nov 12 20:55:02.069226 containerd[1483]: time="2024-11-12T20:55:02.058826731Z" level=info msg="RemoveContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\"" Nov 12 
20:55:02.086080 containerd[1483]: time="2024-11-12T20:55:02.085988886Z" level=info msg="RemoveContainer for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" returns successfully" Nov 12 20:55:02.093897 kubelet[2605]: I1112 20:55:02.093271 2605 scope.go:117] "RemoveContainer" containerID="ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2" Nov 12 20:55:02.101672 containerd[1483]: time="2024-11-12T20:55:02.099544629Z" level=info msg="RemoveContainer for \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\"" Nov 12 20:55:02.112283 containerd[1483]: time="2024-11-12T20:55:02.112049320Z" level=info msg="RemoveContainer for \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\" returns successfully" Nov 12 20:55:02.115837 kubelet[2605]: I1112 20:55:02.114347 2605 scope.go:117] "RemoveContainer" containerID="015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c" Nov 12 20:55:02.121166 containerd[1483]: time="2024-11-12T20:55:02.120834941Z" level=info msg="RemoveContainer for \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\"" Nov 12 20:55:02.140977 containerd[1483]: time="2024-11-12T20:55:02.140668008Z" level=info msg="RemoveContainer for \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\" returns successfully" Nov 12 20:55:02.146189 kubelet[2605]: I1112 20:55:02.145480 2605 scope.go:117] "RemoveContainer" containerID="b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228" Nov 12 20:55:02.156786 containerd[1483]: time="2024-11-12T20:55:02.156273177Z" level=info msg="RemoveContainer for \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\"" Nov 12 20:55:02.182590 containerd[1483]: time="2024-11-12T20:55:02.180759063Z" level=info msg="RemoveContainer for \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\" returns successfully" Nov 12 20:55:02.186163 kubelet[2605]: I1112 20:55:02.185805 2605 scope.go:117] "RemoveContainer" 
containerID="239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3" Nov 12 20:55:02.195934 containerd[1483]: time="2024-11-12T20:55:02.195413785Z" level=info msg="RemoveContainer for \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\"" Nov 12 20:55:02.197044 sshd[4211]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:02.230651 systemd[1]: Started sshd@25-137.184.81.153:22-139.178.68.195:56338.service - OpenSSH per-connection server daemon (139.178.68.195:56338). Nov 12 20:55:02.235508 systemd[1]: sshd@24-137.184.81.153:22-139.178.68.195:56330.service: Deactivated successfully. Nov 12 20:55:02.250250 containerd[1483]: time="2024-11-12T20:55:02.250125658Z" level=info msg="RemoveContainer for \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\" returns successfully" Nov 12 20:55:02.251230 kubelet[2605]: I1112 20:55:02.251196 2605 scope.go:117] "RemoveContainer" containerID="9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25" Nov 12 20:55:02.252492 containerd[1483]: time="2024-11-12T20:55:02.252360652Z" level=error msg="ContainerStatus for \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\": not found" Nov 12 20:55:02.254475 kubelet[2605]: E1112 20:55:02.253778 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\": not found" containerID="9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25" Nov 12 20:55:02.254475 kubelet[2605]: I1112 20:55:02.253847 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25"} err="failed to 
get container status \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dfbf4e93ab80e1058feb4b739dfd6e312dbb35221f550af8c8ce28ea7dc7d25\": not found" Nov 12 20:55:02.254475 kubelet[2605]: I1112 20:55:02.253870 2605 scope.go:117] "RemoveContainer" containerID="ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2" Nov 12 20:55:02.257099 containerd[1483]: time="2024-11-12T20:55:02.256308573Z" level=error msg="ContainerStatus for \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\": not found" Nov 12 20:55:02.257517 kubelet[2605]: E1112 20:55:02.257063 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\": not found" containerID="ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2" Nov 12 20:55:02.258454 kubelet[2605]: I1112 20:55:02.257704 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2"} err="failed to get container status \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef92a3be0d595cdff027909604d1118dce7dbfbbd326af94c18b0b75406304b2\": not found" Nov 12 20:55:02.258454 kubelet[2605]: I1112 20:55:02.257747 2605 scope.go:117] "RemoveContainer" containerID="015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c" Nov 12 20:55:02.258560 containerd[1483]: time="2024-11-12T20:55:02.258172700Z" level=error msg="ContainerStatus for 
\"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\": not found" Nov 12 20:55:02.259035 systemd[1]: session-25.scope: Deactivated successfully. Nov 12 20:55:02.263634 kubelet[2605]: E1112 20:55:02.259935 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\": not found" containerID="015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c" Nov 12 20:55:02.263634 kubelet[2605]: I1112 20:55:02.259994 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c"} err="failed to get container status \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\": rpc error: code = NotFound desc = an error occurred when try to find container \"015c6294bf1e826d2c902f9eeb3c40c1924063ffed1c3e30066e43e3010b003c\": not found" Nov 12 20:55:02.263634 kubelet[2605]: I1112 20:55:02.260014 2605 scope.go:117] "RemoveContainer" containerID="b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228" Nov 12 20:55:02.266079 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Nov 12 20:55:02.287157 containerd[1483]: time="2024-11-12T20:55:02.287035309Z" level=error msg="ContainerStatus for \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\": not found" Nov 12 20:55:02.287695 systemd-logind[1453]: Removed session 25. 
Nov 12 20:55:02.295834 kubelet[2605]: E1112 20:55:02.292675 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\": not found" containerID="b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228" Nov 12 20:55:02.295834 kubelet[2605]: I1112 20:55:02.292760 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228"} err="failed to get container status \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9e17afc632b20e9447b6f07ca58484892fe7064fbccfaf26461b55e1249b228\": not found" Nov 12 20:55:02.295834 kubelet[2605]: I1112 20:55:02.292784 2605 scope.go:117] "RemoveContainer" containerID="239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3" Nov 12 20:55:02.296341 containerd[1483]: time="2024-11-12T20:55:02.295511022Z" level=error msg="ContainerStatus for \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\": not found" Nov 12 20:55:02.296414 kubelet[2605]: E1112 20:55:02.295965 2605 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\": not found" containerID="239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3" Nov 12 20:55:02.296414 kubelet[2605]: I1112 20:55:02.296032 2605 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3"} err="failed to get container status \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"239cc367c9d78d3954c2064cf6367bacbf014f167b002a966d6d133a786544a3\": not found"
Nov 12 20:55:02.502254 sshd[4374]: Accepted publickey for core from 139.178.68.195 port 56338 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:55:02.500569 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:02.522904 systemd-logind[1453]: New session 26 of user core.
Nov 12 20:55:02.542319 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:55:03.880438 kubelet[2605]: I1112 20:55:03.878483 2605 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1abb8aca-ee80-49d4-bce6-f522ea5756c3" path="/var/lib/kubelet/pods/1abb8aca-ee80-49d4-bce6-f522ea5756c3/volumes"
Nov 12 20:55:03.880438 kubelet[2605]: I1112 20:55:03.879270 2605 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="95897e02-b56c-4f89-a70a-22670429f747" path="/var/lib/kubelet/pods/95897e02-b56c-4f89-a70a-22670429f747/volumes"
Nov 12 20:55:04.378273 sshd[4374]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:04.393542 systemd[1]: sshd@25-137.184.81.153:22-139.178.68.195:56338.service: Deactivated successfully.
Nov 12 20:55:04.405260 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:55:04.412658 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:55:04.415329 systemd-logind[1453]: Removed session 26.
Nov 12 20:55:04.424404 systemd[1]: Started sshd@26-137.184.81.153:22-139.178.68.195:56342.service - OpenSSH per-connection server daemon (139.178.68.195:56342).
Nov 12 20:55:04.492651 kubelet[2605]: I1112 20:55:04.491383 2605 topology_manager.go:215] "Topology Admit Handler" podUID="ef03f405-6f7c-4d76-9b94-0d85e12bed7c" podNamespace="kube-system" podName="cilium-gfcpx"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498655 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="mount-cgroup"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498741 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="mount-bpf-fs"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498756 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1abb8aca-ee80-49d4-bce6-f522ea5756c3" containerName="cilium-operator"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498771 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="apply-sysctl-overwrites"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498783 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="clean-cilium-state"
Nov 12 20:55:04.499037 kubelet[2605]: E1112 20:55:04.498813 2605 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="cilium-agent"
Nov 12 20:55:04.499037 kubelet[2605]: I1112 20:55:04.498864 2605 memory_manager.go:354] "RemoveStaleState removing state" podUID="95897e02-b56c-4f89-a70a-22670429f747" containerName="cilium-agent"
Nov 12 20:55:04.499037 kubelet[2605]: I1112 20:55:04.498896 2605 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abb8aca-ee80-49d4-bce6-f522ea5756c3" containerName="cilium-operator"
Nov 12 20:55:04.594216 systemd[1]: Created slice kubepods-burstable-podef03f405_6f7c_4d76_9b94_0d85e12bed7c.slice - libcontainer container kubepods-burstable-podef03f405_6f7c_4d76_9b94_0d85e12bed7c.slice.
Nov 12 20:55:04.615861 sshd[4387]: Accepted publickey for core from 139.178.68.195 port 56342 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:55:04.626429 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:04.657042 systemd-logind[1453]: New session 27 of user core.
Nov 12 20:55:04.671008 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:55:04.691765 kubelet[2605]: I1112 20:55:04.691696 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-cni-path\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707148 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-lib-modules\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707299 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-clustermesh-secrets\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707458 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-cilium-cgroup\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707508 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4xx5\" (UniqueName: \"kubernetes.io/projected/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-kube-api-access-k4xx5\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707569 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-hostproc\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.708571 kubelet[2605]: I1112 20:55:04.707677 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-hubble-tls\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709086 kubelet[2605]: I1112 20:55:04.707742 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-cilium-ipsec-secrets\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709086 kubelet[2605]: I1112 20:55:04.707808 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-host-proc-sys-kernel\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709086 kubelet[2605]: I1112 20:55:04.707848 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-cilium-run\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709086 kubelet[2605]: I1112 20:55:04.707899 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-xtables-lock\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709086 kubelet[2605]: I1112 20:55:04.707948 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-cilium-config-path\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709268 kubelet[2605]: I1112 20:55:04.707982 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-host-proc-sys-net\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709268 kubelet[2605]: I1112 20:55:04.708361 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-bpf-maps\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.709268 kubelet[2605]: I1112 20:55:04.708442 2605 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef03f405-6f7c-4d76-9b94-0d85e12bed7c-etc-cni-netd\") pod \"cilium-gfcpx\" (UID: \"ef03f405-6f7c-4d76-9b94-0d85e12bed7c\") " pod="kube-system/cilium-gfcpx"
Nov 12 20:55:04.837172 sshd[4387]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:04.953391 systemd[1]: Started sshd@27-137.184.81.153:22-139.178.68.195:56346.service - OpenSSH per-connection server daemon (139.178.68.195:56346).
Nov 12 20:55:04.959608 systemd[1]: sshd@26-137.184.81.153:22-139.178.68.195:56342.service: Deactivated successfully.
Nov 12 20:55:05.061313 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:55:05.122820 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:55:05.133597 systemd-logind[1453]: Removed session 27.
Nov 12 20:55:05.208960 sshd[4394]: Accepted publickey for core from 139.178.68.195 port 56346 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:55:05.213406 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:05.229534 kubelet[2605]: E1112 20:55:05.228281 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:05.231084 containerd[1483]: time="2024-11-12T20:55:05.230401520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfcpx,Uid:ef03f405-6f7c-4d76-9b94-0d85e12bed7c,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:05.241118 systemd-logind[1453]: New session 28 of user core.
Nov 12 20:55:05.252805 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 20:55:05.369048 containerd[1483]: time="2024-11-12T20:55:05.368716357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:55:05.372647 containerd[1483]: time="2024-11-12T20:55:05.372093773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:55:05.372647 containerd[1483]: time="2024-11-12T20:55:05.372136169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:55:05.372647 containerd[1483]: time="2024-11-12T20:55:05.372289485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:55:05.452905 systemd[1]: Started cri-containerd-cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535.scope - libcontainer container cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535.
Nov 12 20:55:05.544911 containerd[1483]: time="2024-11-12T20:55:05.543513572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfcpx,Uid:ef03f405-6f7c-4d76-9b94-0d85e12bed7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\""
Nov 12 20:55:05.548433 kubelet[2605]: E1112 20:55:05.547558 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:05.564368 containerd[1483]: time="2024-11-12T20:55:05.564156122Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:55:05.690160 containerd[1483]: time="2024-11-12T20:55:05.690030686Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694\""
Nov 12 20:55:05.696515 containerd[1483]: time="2024-11-12T20:55:05.693457853Z" level=info msg="StartContainer for \"9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694\""
Nov 12 20:55:05.772372 systemd[1]: Started cri-containerd-9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694.scope - libcontainer container 9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694.
Nov 12 20:55:05.773531 containerd[1483]: time="2024-11-12T20:55:05.773476084Z" level=info msg="StopPodSandbox for \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\""
Nov 12 20:55:05.774569 containerd[1483]: time="2024-11-12T20:55:05.774234342Z" level=info msg="TearDown network for sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" successfully"
Nov 12 20:55:05.776761 containerd[1483]: time="2024-11-12T20:55:05.774998186Z" level=info msg="StopPodSandbox for \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" returns successfully"
Nov 12 20:55:05.778063 containerd[1483]: time="2024-11-12T20:55:05.777998986Z" level=info msg="RemovePodSandbox for \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\""
Nov 12 20:55:05.783189 containerd[1483]: time="2024-11-12T20:55:05.782974550Z" level=info msg="Forcibly stopping sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\""
Nov 12 20:55:05.783189 containerd[1483]: time="2024-11-12T20:55:05.783128764Z" level=info msg="TearDown network for sandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" successfully"
Nov 12 20:55:05.795914 containerd[1483]: time="2024-11-12T20:55:05.795380960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:05.795914 containerd[1483]: time="2024-11-12T20:55:05.795504669Z" level=info msg="RemovePodSandbox \"33713ae2a4dc5cd67c49e0a27ea0e680c79cef9fbfd9313b16c65db13edabfb0\" returns successfully"
Nov 12 20:55:05.806283 containerd[1483]: time="2024-11-12T20:55:05.806209229Z" level=info msg="StopPodSandbox for \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\""
Nov 12 20:55:05.806774 containerd[1483]: time="2024-11-12T20:55:05.806598242Z" level=info msg="TearDown network for sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" successfully"
Nov 12 20:55:05.806774 containerd[1483]: time="2024-11-12T20:55:05.806653273Z" level=info msg="StopPodSandbox for \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" returns successfully"
Nov 12 20:55:05.810783 containerd[1483]: time="2024-11-12T20:55:05.810082513Z" level=info msg="RemovePodSandbox for \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\""
Nov 12 20:55:05.810783 containerd[1483]: time="2024-11-12T20:55:05.810146988Z" level=info msg="Forcibly stopping sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\""
Nov 12 20:55:05.810783 containerd[1483]: time="2024-11-12T20:55:05.810433156Z" level=info msg="TearDown network for sandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" successfully"
Nov 12 20:55:05.823077 containerd[1483]: time="2024-11-12T20:55:05.823002897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:05.824390 containerd[1483]: time="2024-11-12T20:55:05.824175360Z" level=info msg="RemovePodSandbox \"da9e7a0da378f59f3c143b7442ea03bae5f2bff14c0ccc401419f210ef8da8cd\" returns successfully"
Nov 12 20:55:05.913027 containerd[1483]: time="2024-11-12T20:55:05.912938180Z" level=info msg="StartContainer for \"9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694\" returns successfully"
Nov 12 20:55:05.940302 systemd[1]: cri-containerd-9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694.scope: Deactivated successfully.
Nov 12 20:55:06.056905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694-rootfs.mount: Deactivated successfully.
Nov 12 20:55:06.087865 containerd[1483]: time="2024-11-12T20:55:06.086824605Z" level=info msg="shim disconnected" id=9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694 namespace=k8s.io
Nov 12 20:55:06.087865 containerd[1483]: time="2024-11-12T20:55:06.086912127Z" level=warning msg="cleaning up after shim disconnected" id=9d49c0cd922cde5caf2636cf585eea72e57a27c471bed148d4dcacb67f31b694 namespace=k8s.io
Nov 12 20:55:06.087865 containerd[1483]: time="2024-11-12T20:55:06.086925291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:06.130507 kubelet[2605]: E1112 20:55:06.130464 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:06.164140 kubelet[2605]: E1112 20:55:06.164048 2605 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 20:55:07.140486 kubelet[2605]: E1112 20:55:07.140035 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:07.171568 containerd[1483]: time="2024-11-12T20:55:07.171168289Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:55:07.221852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958034391.mount: Deactivated successfully.
Nov 12 20:55:07.226206 containerd[1483]: time="2024-11-12T20:55:07.222892063Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e\""
Nov 12 20:55:07.281695 containerd[1483]: time="2024-11-12T20:55:07.280406247Z" level=info msg="StartContainer for \"996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e\""
Nov 12 20:55:07.418647 systemd[1]: Started cri-containerd-996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e.scope - libcontainer container 996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e.
Nov 12 20:55:07.593024 containerd[1483]: time="2024-11-12T20:55:07.591505469Z" level=info msg="StartContainer for \"996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e\" returns successfully"
Nov 12 20:55:07.653572 systemd[1]: cri-containerd-996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e.scope: Deactivated successfully.
Nov 12 20:55:07.753924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e-rootfs.mount: Deactivated successfully.
Nov 12 20:55:07.769002 containerd[1483]: time="2024-11-12T20:55:07.768124266Z" level=info msg="shim disconnected" id=996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e namespace=k8s.io
Nov 12 20:55:07.769002 containerd[1483]: time="2024-11-12T20:55:07.768222318Z" level=warning msg="cleaning up after shim disconnected" id=996242bf8ad6ab7c1f747bfdae2c122e7a2f6263f64c7c888bdcd1f7a6c4ea3e namespace=k8s.io
Nov 12 20:55:07.769002 containerd[1483]: time="2024-11-12T20:55:07.768236066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:08.146526 kubelet[2605]: E1112 20:55:08.145930 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:08.172222 containerd[1483]: time="2024-11-12T20:55:08.170909874Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:55:08.251529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899605375.mount: Deactivated successfully.
Nov 12 20:55:08.264886 containerd[1483]: time="2024-11-12T20:55:08.264598968Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3\""
Nov 12 20:55:08.282043 containerd[1483]: time="2024-11-12T20:55:08.279790139Z" level=info msg="StartContainer for \"b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3\""
Nov 12 20:55:08.410033 systemd[1]: Started cri-containerd-b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3.scope - libcontainer container b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3.
Nov 12 20:55:08.508476 containerd[1483]: time="2024-11-12T20:55:08.507807921Z" level=info msg="StartContainer for \"b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3\" returns successfully"
Nov 12 20:55:08.517599 systemd[1]: cri-containerd-b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3.scope: Deactivated successfully.
Nov 12 20:55:08.610243 containerd[1483]: time="2024-11-12T20:55:08.609834042Z" level=info msg="shim disconnected" id=b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3 namespace=k8s.io
Nov 12 20:55:08.610243 containerd[1483]: time="2024-11-12T20:55:08.609934801Z" level=warning msg="cleaning up after shim disconnected" id=b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3 namespace=k8s.io
Nov 12 20:55:08.610243 containerd[1483]: time="2024-11-12T20:55:08.609944703Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:08.901636 kubelet[2605]: I1112 20:55:08.898277 2605 setters.go:568] "Node became not ready" node="ci-4081.2.0-a-ee124ee133" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T20:55:08Z","lastTransitionTime":"2024-11-12T20:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 12 20:55:09.156003 kubelet[2605]: E1112 20:55:09.155477 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:09.166815 containerd[1483]: time="2024-11-12T20:55:09.165809690Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:55:09.202319 containerd[1483]: time="2024-11-12T20:55:09.200528747Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff\""
Nov 12 20:55:09.211200 containerd[1483]: time="2024-11-12T20:55:09.209470277Z" level=info msg="StartContainer for \"f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff\""
Nov 12 20:55:09.229452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39210e5ed963c691284edfe42775b774c56fc13bccd5160fc6673703a5d71b3-rootfs.mount: Deactivated successfully.
Nov 12 20:55:09.346595 systemd[1]: Started cri-containerd-f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff.scope - libcontainer container f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff.
Nov 12 20:55:09.439211 containerd[1483]: time="2024-11-12T20:55:09.438133899Z" level=info msg="StartContainer for \"f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff\" returns successfully"
Nov 12 20:55:09.453316 systemd[1]: cri-containerd-f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff.scope: Deactivated successfully.
Nov 12 20:55:09.541779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff-rootfs.mount: Deactivated successfully.
Nov 12 20:55:09.560672 containerd[1483]: time="2024-11-12T20:55:09.560235957Z" level=info msg="shim disconnected" id=f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff namespace=k8s.io
Nov 12 20:55:09.560672 containerd[1483]: time="2024-11-12T20:55:09.560326928Z" level=warning msg="cleaning up after shim disconnected" id=f0371ee90fad0f236b2ccfe78f464ee1cae3bead7fdd017ae8202277678557ff namespace=k8s.io
Nov 12 20:55:09.560672 containerd[1483]: time="2024-11-12T20:55:09.560345497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:09.602398 containerd[1483]: time="2024-11-12T20:55:09.602302465Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:55:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:55:10.197988 kubelet[2605]: E1112 20:55:10.195232 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:10.204256 containerd[1483]: time="2024-11-12T20:55:10.204178250Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:55:10.251923 containerd[1483]: time="2024-11-12T20:55:10.251756644Z" level=info msg="CreateContainer within sandbox \"cfb2494cadd4eda8ce6534f31c3b7ca14ea44b25acae6b91961588b759269535\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc\""
Nov 12 20:55:10.253673 containerd[1483]: time="2024-11-12T20:55:10.253271482Z" level=info msg="StartContainer for \"e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc\""
Nov 12 20:55:10.372841 systemd[1]: Started cri-containerd-e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc.scope - libcontainer container e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc.
Nov 12 20:55:10.514871 containerd[1483]: time="2024-11-12T20:55:10.514703406Z" level=info msg="StartContainer for \"e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc\" returns successfully"
Nov 12 20:55:11.230896 kubelet[2605]: E1112 20:55:11.221290 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:11.744732 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Nov 12 20:55:12.219115 kubelet[2605]: E1112 20:55:12.219053 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:12.704364 systemd[1]: run-containerd-runc-k8s.io-e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc-runc.K67ohA.mount: Deactivated successfully.
Nov 12 20:55:18.792313 systemd-networkd[1375]: lxc_health: Link UP
Nov 12 20:55:18.817760 systemd-networkd[1375]: lxc_health: Gained carrier
Nov 12 20:55:18.873483 kubelet[2605]: E1112 20:55:18.872774 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:19.247670 kubelet[2605]: E1112 20:55:19.247286 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:19.308259 kubelet[2605]: E1112 20:55:19.287591 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:19.308259 kubelet[2605]: I1112 20:55:19.307764 2605 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gfcpx" podStartSLOduration=15.307693959 podStartE2EDuration="15.307693959s" podCreationTimestamp="2024-11-12 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:11.328417352 +0000 UTC m=+125.849631990" watchObservedRunningTime="2024-11-12 20:55:19.307693959 +0000 UTC m=+133.828908531"
Nov 12 20:55:19.914119 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Nov 12 20:55:20.872428 kubelet[2605]: E1112 20:55:20.871687 2605 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 12 20:55:22.500466 systemd[1]: run-containerd-runc-k8s.io-e9e0ff97235598d66e42771726a3312083b08af54771dfd52412878da76f50bc-runc.7iieV6.mount: Deactivated successfully.
Nov 12 20:55:24.986031 sshd[4394]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:24.995397 systemd[1]: sshd@27-137.184.81.153:22-139.178.68.195:56346.service: Deactivated successfully.
Nov 12 20:55:25.004287 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:55:25.005729 systemd-logind[1453]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:55:25.009699 systemd-logind[1453]: Removed session 28.