Jan 16 08:57:08.999746 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 16 08:57:08.999792 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:08.999814 kernel: BIOS-provided physical RAM map:
Jan 16 08:57:08.999826 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 08:57:08.999834 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 08:57:08.999844 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 08:57:08.999858 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 16 08:57:08.999872 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 16 08:57:08.999883 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 08:57:08.999900 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 08:57:08.999912 kernel: NX (Execute Disable) protection: active
Jan 16 08:57:08.999919 kernel: APIC: Static calls initialized
Jan 16 08:57:08.999962 kernel: SMBIOS 2.8 present.
Jan 16 08:57:08.999977 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 08:57:08.999994 kernel: Hypervisor detected: KVM
Jan 16 08:57:09.000013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 08:57:09.000027 kernel: kvm-clock: using sched offset of 3975776511 cycles
Jan 16 08:57:09.000043 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 08:57:09.000057 kernel: tsc: Detected 2494.138 MHz processor
Jan 16 08:57:09.000069 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 08:57:09.000083 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 08:57:09.000098 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 16 08:57:09.000123 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 08:57:09.000135 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 08:57:09.000152 kernel: ACPI: Early table checksum verification disabled
Jan 16 08:57:09.000164 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 16 08:57:09.000215 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000228 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000237 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000245 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 08:57:09.000253 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000261 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000269 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000282 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:57:09.000290 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 16 08:57:09.000298 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 16 08:57:09.000306 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 08:57:09.000314 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 16 08:57:09.000322 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 16 08:57:09.000330 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 16 08:57:09.000348 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 16 08:57:09.000356 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 16 08:57:09.000364 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 16 08:57:09.000373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 08:57:09.000382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 08:57:09.000391 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 16 08:57:09.000399 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 16 08:57:09.000412 kernel: Zone ranges:
Jan 16 08:57:09.000420 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 08:57:09.000429 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 16 08:57:09.000437 kernel: Normal empty
Jan 16 08:57:09.000446 kernel: Movable zone start for each node
Jan 16 08:57:09.000454 kernel: Early memory node ranges
Jan 16 08:57:09.000463 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 08:57:09.000471 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 16 08:57:09.000480 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 16 08:57:09.000496 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 08:57:09.000505 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 08:57:09.000514 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 16 08:57:09.000523 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 08:57:09.000531 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 08:57:09.000540 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 08:57:09.000548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 08:57:09.000557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 08:57:09.000565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 08:57:09.000578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 08:57:09.000587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 08:57:09.000595 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 08:57:09.000604 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 08:57:09.000612 kernel: TSC deadline timer available
Jan 16 08:57:09.000621 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 16 08:57:09.000629 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 08:57:09.000637 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 08:57:09.000646 kernel: Booting paravirtualized kernel on KVM
Jan 16 08:57:09.000659 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 08:57:09.000668 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 08:57:09.000677 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 16 08:57:09.000685 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 16 08:57:09.000693 kernel: pcpu-alloc: [0] 0 1
Jan 16 08:57:09.000702 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 08:57:09.000712 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:09.000721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 16 08:57:09.000733 kernel: random: crng init done
Jan 16 08:57:09.000742 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 08:57:09.000750 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 08:57:09.000759 kernel: Fallback order for Node 0: 0
Jan 16 08:57:09.000767 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 16 08:57:09.000776 kernel: Policy zone: DMA32
Jan 16 08:57:09.000784 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 08:57:09.000793 kernel: Memory: 1971188K/2096600K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Jan 16 08:57:09.000802 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 08:57:09.000814 kernel: Kernel/User page tables isolation: enabled
Jan 16 08:57:09.000823 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 16 08:57:09.000831 kernel: ftrace: allocated 149 pages with 4 groups
Jan 16 08:57:09.000840 kernel: Dynamic Preempt: voluntary
Jan 16 08:57:09.000848 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 08:57:09.000858 kernel: rcu: RCU event tracing is enabled.
Jan 16 08:57:09.000867 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 08:57:09.000875 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 08:57:09.000884 kernel: Rude variant of Tasks RCU enabled.
Jan 16 08:57:09.000897 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 08:57:09.000905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 08:57:09.000914 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 08:57:09.000922 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 08:57:09.000931 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 08:57:09.000940 kernel: Console: colour VGA+ 80x25
Jan 16 08:57:09.000948 kernel: printk: console [tty0] enabled
Jan 16 08:57:09.000957 kernel: printk: console [ttyS0] enabled
Jan 16 08:57:09.000965 kernel: ACPI: Core revision 20230628
Jan 16 08:57:09.000974 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 08:57:09.000987 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 08:57:09.000995 kernel: x2apic enabled
Jan 16 08:57:09.001004 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 08:57:09.001012 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 08:57:09.001021 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 16 08:57:09.001030 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jan 16 08:57:09.001039 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 08:57:09.001047 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 08:57:09.001070 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 08:57:09.001079 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 08:57:09.001089 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 08:57:09.001101 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 08:57:09.001110 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 08:57:09.001120 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 08:57:09.001129 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 08:57:09.001138 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 08:57:09.001147 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 08:57:09.001160 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 08:57:09.001170 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 08:57:09.001207 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 08:57:09.001216 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 08:57:09.001225 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 08:57:09.001235 kernel: Freeing SMP alternatives memory: 32K
Jan 16 08:57:09.001244 kernel: pid_max: default: 32768 minimum: 301
Jan 16 08:57:09.001253 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 08:57:09.001267 kernel: landlock: Up and running.
Jan 16 08:57:09.001276 kernel: SELinux: Initializing.
Jan 16 08:57:09.001285 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:57:09.001294 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:57:09.001303 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 08:57:09.001313 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:09.001322 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:09.001332 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:57:09.001345 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 08:57:09.001355 kernel: signal: max sigframe size: 1776
Jan 16 08:57:09.001364 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 08:57:09.001374 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 08:57:09.001383 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 08:57:09.001392 kernel: smp: Bringing up secondary CPUs ...
Jan 16 08:57:09.001401 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 08:57:09.001410 kernel: .... node #0, CPUs: #1
Jan 16 08:57:09.001419 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 08:57:09.001429 kernel: smpboot: Max logical packages: 1
Jan 16 08:57:09.001446 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jan 16 08:57:09.001460 kernel: devtmpfs: initialized
Jan 16 08:57:09.001474 kernel: x86/mm: Memory block size: 128MB
Jan 16 08:57:09.001488 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 08:57:09.001502 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 08:57:09.001516 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 08:57:09.001530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 08:57:09.001543 kernel: audit: initializing netlink subsys (disabled)
Jan 16 08:57:09.001553 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 08:57:09.001567 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 08:57:09.001576 kernel: audit: type=2000 audit(1737017827.947:1): state=initialized audit_enabled=0 res=1
Jan 16 08:57:09.001585 kernel: cpuidle: using governor menu
Jan 16 08:57:09.001595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 08:57:09.001606 kernel: dca service started, version 1.12.1
Jan 16 08:57:09.001615 kernel: PCI: Using configuration type 1 for base access
Jan 16 08:57:09.001624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 08:57:09.001633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 08:57:09.001642 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 08:57:09.001655 kernel: ACPI: Added _OSI(Module Device)
Jan 16 08:57:09.001665 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 08:57:09.001674 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 08:57:09.001685 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 08:57:09.001697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 08:57:09.001706 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 16 08:57:09.001715 kernel: ACPI: Interpreter enabled
Jan 16 08:57:09.001724 kernel: ACPI: PM: (supports S0 S5)
Jan 16 08:57:09.001734 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 08:57:09.001747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 08:57:09.001756 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 08:57:09.001766 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 08:57:09.001775 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 08:57:09.002018 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 08:57:09.002131 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 08:57:09.002254 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 08:57:09.002274 kernel: acpiphp: Slot [3] registered
Jan 16 08:57:09.002284 kernel: acpiphp: Slot [4] registered
Jan 16 08:57:09.002293 kernel: acpiphp: Slot [5] registered
Jan 16 08:57:09.002303 kernel: acpiphp: Slot [6] registered
Jan 16 08:57:09.002312 kernel: acpiphp: Slot [7] registered
Jan 16 08:57:09.002321 kernel: acpiphp: Slot [8] registered
Jan 16 08:57:09.002330 kernel: acpiphp: Slot [9] registered
Jan 16 08:57:09.002340 kernel: acpiphp: Slot [10] registered
Jan 16 08:57:09.002349 kernel: acpiphp: Slot [11] registered
Jan 16 08:57:09.002362 kernel: acpiphp: Slot [12] registered
Jan 16 08:57:09.002371 kernel: acpiphp: Slot [13] registered
Jan 16 08:57:09.002380 kernel: acpiphp: Slot [14] registered
Jan 16 08:57:09.002390 kernel: acpiphp: Slot [15] registered
Jan 16 08:57:09.002399 kernel: acpiphp: Slot [16] registered
Jan 16 08:57:09.002408 kernel: acpiphp: Slot [17] registered
Jan 16 08:57:09.002418 kernel: acpiphp: Slot [18] registered
Jan 16 08:57:09.002427 kernel: acpiphp: Slot [19] registered
Jan 16 08:57:09.002436 kernel: acpiphp: Slot [20] registered
Jan 16 08:57:09.002445 kernel: acpiphp: Slot [21] registered
Jan 16 08:57:09.002458 kernel: acpiphp: Slot [22] registered
Jan 16 08:57:09.002468 kernel: acpiphp: Slot [23] registered
Jan 16 08:57:09.002477 kernel: acpiphp: Slot [24] registered
Jan 16 08:57:09.002486 kernel: acpiphp: Slot [25] registered
Jan 16 08:57:09.002495 kernel: acpiphp: Slot [26] registered
Jan 16 08:57:09.002504 kernel: acpiphp: Slot [27] registered
Jan 16 08:57:09.002513 kernel: acpiphp: Slot [28] registered
Jan 16 08:57:09.002522 kernel: acpiphp: Slot [29] registered
Jan 16 08:57:09.002531 kernel: acpiphp: Slot [30] registered
Jan 16 08:57:09.002544 kernel: acpiphp: Slot [31] registered
Jan 16 08:57:09.002557 kernel: PCI host bridge to bus 0000:00
Jan 16 08:57:09.002695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 08:57:09.002793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 08:57:09.002887 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 08:57:09.002976 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 08:57:09.003063 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 08:57:09.003151 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 08:57:09.003313 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 08:57:09.003519 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 08:57:09.003637 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 08:57:09.003746 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 16 08:57:09.003886 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 08:57:09.004047 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 08:57:09.004927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 08:57:09.005123 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 08:57:09.005345 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 08:57:09.005502 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 16 08:57:09.005669 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 08:57:09.005814 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 08:57:09.005928 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 08:57:09.006052 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 16 08:57:09.006159 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 16 08:57:09.009404 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 08:57:09.009534 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 16 08:57:09.009638 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 16 08:57:09.009744 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 08:57:09.009868 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:57:09.009971 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 16 08:57:09.010072 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 16 08:57:09.010184 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 08:57:09.010312 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:57:09.010414 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 16 08:57:09.010520 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 16 08:57:09.010621 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 08:57:09.010762 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 16 08:57:09.010892 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 16 08:57:09.011112 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 16 08:57:09.011445 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 08:57:09.011786 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:57:09.011954 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 16 08:57:09.012058 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 16 08:57:09.013233 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 08:57:09.013401 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:57:09.013507 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 16 08:57:09.013609 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 16 08:57:09.013708 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 08:57:09.013819 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 08:57:09.013928 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 16 08:57:09.014055 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 08:57:09.014074 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 08:57:09.014085 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 08:57:09.014095 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 08:57:09.014104 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 08:57:09.014119 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 08:57:09.014129 kernel: iommu: Default domain type: Translated
Jan 16 08:57:09.014139 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 08:57:09.014148 kernel: PCI: Using ACPI for IRQ routing
Jan 16 08:57:09.014157 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 08:57:09.014167 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 08:57:09.014467 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 16 08:57:09.014598 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 08:57:09.014702 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 08:57:09.014812 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 08:57:09.014826 kernel: vgaarb: loaded
Jan 16 08:57:09.014835 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 08:57:09.014845 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 08:57:09.014855 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 08:57:09.014865 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 08:57:09.014875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 08:57:09.014884 kernel: pnp: PnP ACPI init
Jan 16 08:57:09.014894 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 08:57:09.014909 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 08:57:09.014918 kernel: NET: Registered PF_INET protocol family
Jan 16 08:57:09.014928 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 08:57:09.014938 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 08:57:09.014948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 08:57:09.014961 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 08:57:09.014974 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 08:57:09.014987 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 08:57:09.015000 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:57:09.015020 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:57:09.015033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 08:57:09.015049 kernel: NET: Registered PF_XDP protocol family
Jan 16 08:57:09.016339 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 08:57:09.016523 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 08:57:09.016643 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 08:57:09.016749 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 08:57:09.016890 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 08:57:09.017045 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 08:57:09.017268 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 08:57:09.017290 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 08:57:09.017448 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31777 usecs
Jan 16 08:57:09.017472 kernel: PCI: CLS 0 bytes, default 64
Jan 16 08:57:09.017490 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 08:57:09.017500 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 16 08:57:09.017510 kernel: Initialise system trusted keyrings
Jan 16 08:57:09.017529 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 08:57:09.017539 kernel: Key type asymmetric registered
Jan 16 08:57:09.017548 kernel: Asymmetric key parser 'x509' registered
Jan 16 08:57:09.017557 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 16 08:57:09.017567 kernel: io scheduler mq-deadline registered
Jan 16 08:57:09.017576 kernel: io scheduler kyber registered
Jan 16 08:57:09.017590 kernel: io scheduler bfq registered
Jan 16 08:57:09.017605 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 08:57:09.017621 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 08:57:09.017642 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 08:57:09.017661 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 08:57:09.017673 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 08:57:09.017683 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 08:57:09.017692 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 08:57:09.017702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 08:57:09.017712 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 08:57:09.017722 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 16 08:57:09.017907 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 08:57:09.018061 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 08:57:09.018254 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T08:57:08 UTC (1737017828)
Jan 16 08:57:09.018397 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 08:57:09.018417 kernel: intel_pstate: CPU model not supported
Jan 16 08:57:09.018433 kernel: NET: Registered PF_INET6 protocol family
Jan 16 08:57:09.018449 kernel: Segment Routing with IPv6
Jan 16 08:57:09.018465 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 08:57:09.018480 kernel: NET: Registered PF_PACKET protocol family
Jan 16 08:57:09.018508 kernel: Key type dns_resolver registered
Jan 16 08:57:09.018526 kernel: IPI shorthand broadcast: enabled
Jan 16 08:57:09.018544 kernel: sched_clock: Marking stable (984079972, 95912885)->(1105750151, -25757294)
Jan 16 08:57:09.018561 kernel: registered taskstats version 1
Jan 16 08:57:09.018579 kernel: Loading compiled-in X.509 certificates
Jan 16 08:57:09.018596 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 16 08:57:09.018611 kernel: Key type .fscrypt registered
Jan 16 08:57:09.018628 kernel: Key type fscrypt-provisioning registered
Jan 16 08:57:09.018645 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 08:57:09.018667 kernel: ima: Allocated hash algorithm: sha1
Jan 16 08:57:09.018685 kernel: ima: No architecture policies found
Jan 16 08:57:09.018703 kernel: clk: Disabling unused clocks
Jan 16 08:57:09.018721 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 16 08:57:09.018739 kernel: Write protecting the kernel read-only data: 36864k
Jan 16 08:57:09.018791 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 16 08:57:09.018811 kernel: Run /init as init process
Jan 16 08:57:09.018827 kernel: with arguments:
Jan 16 08:57:09.018844 kernel: /init
Jan 16 08:57:09.018864 kernel: with environment:
Jan 16 08:57:09.018881 kernel: HOME=/
Jan 16 08:57:09.018895 kernel: TERM=linux
Jan 16 08:57:09.018912 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 16 08:57:09.018935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 08:57:09.018956 systemd[1]: Detected virtualization kvm.
Jan 16 08:57:09.018976 systemd[1]: Detected architecture x86-64.
Jan 16 08:57:09.018995 systemd[1]: Running in initrd.
Jan 16 08:57:09.019017 systemd[1]: No hostname configured, using default hostname.
Jan 16 08:57:09.019033 systemd[1]: Hostname set to .
Jan 16 08:57:09.019048 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 08:57:09.019063 systemd[1]: Queued start job for default target initrd.target.
Jan 16 08:57:09.019081 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:57:09.019097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:57:09.019114 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 08:57:09.019128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 08:57:09.019149 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 08:57:09.019164 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 08:57:09.020289 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 08:57:09.020312 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 08:57:09.020329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:57:09.020345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:57:09.020371 systemd[1]: Reached target paths.target - Path Units.
Jan 16 08:57:09.020388 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 08:57:09.020405 systemd[1]: Reached target swap.target - Swaps.
Jan 16 08:57:09.020424 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 08:57:09.020439 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 08:57:09.020454 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 08:57:09.020489 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 08:57:09.020505 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 08:57:09.020525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:57:09.020544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:57:09.020564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:57:09.020583 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 08:57:09.020602 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 08:57:09.020622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 08:57:09.020647 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 08:57:09.020665 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 08:57:09.020682 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 08:57:09.020699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 08:57:09.020717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:09.020737 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 08:57:09.020756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:57:09.020833 systemd-journald[183]: Collecting audit messages is disabled.
Jan 16 08:57:09.020882 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 08:57:09.020902 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 08:57:09.020929 systemd-journald[183]: Journal started
Jan 16 08:57:09.020970 systemd-journald[183]: Runtime Journal (/run/log/journal/1a6370eb4b764e1e9ff9b892c782a3ff) is 4.9M, max 39.3M, 34.4M free.
Jan 16 08:57:09.011721 systemd-modules-load[184]: Inserted module 'overlay'
Jan 16 08:57:09.025698 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 08:57:09.034778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 08:57:09.091730 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 08:57:09.091806 kernel: Bridge firewalling registered
Jan 16 08:57:09.045829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:57:09.061042 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 16 08:57:09.088021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:57:09.094740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:09.103489 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:09.107997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 08:57:09.110965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 08:57:09.112860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:57:09.138156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:57:09.147625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 08:57:09.149002 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:09.150325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:57:09.158677 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 08:57:09.191216 dracut-cmdline[219]: dracut-dracut-053
Jan 16 08:57:09.200308 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 16 08:57:09.215748 systemd-resolved[217]: Positive Trust Anchors:
Jan 16 08:57:09.215769 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 08:57:09.215835 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 08:57:09.225617 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 16 08:57:09.229515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 08:57:09.230963 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:57:09.312248 kernel: SCSI subsystem initialized
Jan 16 08:57:09.322217 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 08:57:09.333212 kernel: iscsi: registered transport (tcp)
Jan 16 08:57:09.356208 kernel: iscsi: registered transport (qla4xxx)
Jan 16 08:57:09.356283 kernel: QLogic iSCSI HBA Driver
Jan 16 08:57:09.408634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 08:57:09.414458 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 08:57:09.443554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 08:57:09.443624 kernel: device-mapper: uevent: version 1.0.3
Jan 16 08:57:09.444957 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 08:57:09.487217 kernel: raid6: avx2x4 gen() 16867 MB/s
Jan 16 08:57:09.504219 kernel: raid6: avx2x2 gen() 15325 MB/s
Jan 16 08:57:09.521486 kernel: raid6: avx2x1 gen() 12602 MB/s
Jan 16 08:57:09.521562 kernel: raid6: using algorithm avx2x4 gen() 16867 MB/s
Jan 16 08:57:09.539492 kernel: raid6: .... xor() 9298 MB/s, rmw enabled
Jan 16 08:57:09.539575 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 08:57:09.560229 kernel: xor: automatically using best checksumming function avx
Jan 16 08:57:09.721228 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 08:57:09.735500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 08:57:09.741430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:57:09.758538 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 16 08:57:09.764004 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:57:09.772373 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 08:57:09.789273 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 16 08:57:09.829380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 08:57:09.834412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 08:57:09.890806 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:57:09.897436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 08:57:09.923517 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 08:57:09.926168 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 08:57:09.927800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:57:09.928724 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 08:57:09.934391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 08:57:09.960780 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 08:57:09.992201 kernel: scsi host0: Virtio SCSI HBA
Jan 16 08:57:09.997200 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 08:57:10.057014 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 08:57:10.057041 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 08:57:10.057269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 08:57:10.057290 kernel: GPT:9289727 != 125829119
Jan 16 08:57:10.057307 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 08:57:10.057325 kernel: GPT:9289727 != 125829119
Jan 16 08:57:10.057355 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 08:57:10.057371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:10.057388 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 16 08:57:10.057406 kernel: AES CTR mode by8 optimization enabled
Jan 16 08:57:10.044859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 08:57:10.045017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:10.060988 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 08:57:10.085367 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 16 08:57:10.085554 kernel: libata version 3.00 loaded.
Jan 16 08:57:10.045755 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:10.046357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:57:10.046528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:10.047002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:10.056573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:57:10.101810 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 08:57:10.187686 kernel: ACPI: bus type USB registered
Jan 16 08:57:10.187714 kernel: usbcore: registered new interface driver usbfs
Jan 16 08:57:10.187733 kernel: usbcore: registered new interface driver hub
Jan 16 08:57:10.187751 kernel: usbcore: registered new device driver usb
Jan 16 08:57:10.187768 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (452)
Jan 16 08:57:10.187786 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (450)
Jan 16 08:57:10.187805 kernel: scsi host1: ata_piix
Jan 16 08:57:10.188021 kernel: scsi host2: ata_piix
Jan 16 08:57:10.188242 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 16 08:57:10.188323 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 16 08:57:10.169428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 08:57:10.191155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:57:10.202085 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 08:57:10.209640 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 08:57:10.209904 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 08:57:10.210076 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 08:57:10.210276 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 08:57:10.210456 kernel: hub 1-0:1.0: USB hub found
Jan 16 08:57:10.210652 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 08:57:10.218370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 08:57:10.223466 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 16 08:57:10.224044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 08:57:10.231394 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 08:57:10.249511 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:57:10.258950 disk-uuid[542]: Primary Header is updated.
Jan 16 08:57:10.258950 disk-uuid[542]: Secondary Entries is updated.
Jan 16 08:57:10.258950 disk-uuid[542]: Secondary Header is updated.
Jan 16 08:57:10.270212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:10.274824 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:57:11.285203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:57:11.286403 disk-uuid[545]: The operation has completed successfully.
Jan 16 08:57:11.335867 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 08:57:11.336052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 08:57:11.361463 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 08:57:11.365003 sh[564]: Success
Jan 16 08:57:11.379203 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 16 08:57:11.434067 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 08:57:11.452409 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 08:57:11.455869 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 08:57:11.491240 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 16 08:57:11.491314 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:11.491335 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 08:57:11.491348 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 08:57:11.491360 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 08:57:11.500516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 08:57:11.501746 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 08:57:11.512465 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 08:57:11.516516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 08:57:11.527243 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:11.527335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:11.527350 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:57:11.531202 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:57:11.544207 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:11.544482 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 08:57:11.554042 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 08:57:11.562426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 08:57:11.658142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 08:57:11.667425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 08:57:11.696348 systemd-networkd[747]: lo: Link UP
Jan 16 08:57:11.697209 systemd-networkd[747]: lo: Gained carrier
Jan 16 08:57:11.701115 systemd-networkd[747]: Enumeration completed
Jan 16 08:57:11.703656 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 08:57:11.704061 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:57:11.704067 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 08:57:11.704330 systemd[1]: Reached target network.target - Network.
Jan 16 08:57:11.705807 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:57:11.705812 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 08:57:11.708361 systemd-networkd[747]: eth0: Link UP
Jan 16 08:57:11.708367 systemd-networkd[747]: eth0: Gained carrier
Jan 16 08:57:11.708382 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:57:11.714700 systemd-networkd[747]: eth1: Link UP
Jan 16 08:57:11.714706 systemd-networkd[747]: eth1: Gained carrier
Jan 16 08:57:11.714724 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:57:11.728695 ignition[658]: Ignition 2.20.0
Jan 16 08:57:11.728712 ignition[658]: Stage: fetch-offline
Jan 16 08:57:11.730365 systemd-networkd[747]: eth0: DHCPv4 address 64.227.106.156/20, gateway 64.227.96.1 acquired from 169.254.169.253
Jan 16 08:57:11.728766 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:11.730635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 08:57:11.728779 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:11.734301 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253
Jan 16 08:57:11.728909 ignition[658]: parsed url from cmdline: ""
Jan 16 08:57:11.728916 ignition[658]: no config URL provided
Jan 16 08:57:11.728927 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:57:11.728938 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:57:11.728951 ignition[658]: failed to fetch config: resource requires networking
Jan 16 08:57:11.729416 ignition[658]: Ignition finished successfully
Jan 16 08:57:11.741548 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 08:57:11.766073 ignition[755]: Ignition 2.20.0
Jan 16 08:57:11.766090 ignition[755]: Stage: fetch
Jan 16 08:57:11.766365 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:11.766379 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:11.766536 ignition[755]: parsed url from cmdline: ""
Jan 16 08:57:11.766542 ignition[755]: no config URL provided
Jan 16 08:57:11.766550 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:57:11.766561 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:57:11.766604 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 16 08:57:11.786999 ignition[755]: GET result: OK
Jan 16 08:57:11.787197 ignition[755]: parsing config with SHA512: 72186d8fd500dccf3549913343d833796cf2f373aa2286cfe718d4a78eb669f37c76757c0ab6a745eeec714ad3873859b77b15a95ab8617a8bf43129dd241e69
Jan 16 08:57:11.797299 unknown[755]: fetched base config from "system"
Jan 16 08:57:11.797981 unknown[755]: fetched base config from "system"
Jan 16 08:57:11.798424 unknown[755]: fetched user config from "digitalocean"
Jan 16 08:57:11.799286 ignition[755]: fetch: fetch complete
Jan 16 08:57:11.799697 ignition[755]: fetch: fetch passed
Jan 16 08:57:11.799764 ignition[755]: Ignition finished successfully
Jan 16 08:57:11.802755 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 08:57:11.808534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 08:57:11.831890 ignition[763]: Ignition 2.20.0
Jan 16 08:57:11.831899 ignition[763]: Stage: kargs
Jan 16 08:57:11.832262 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:11.832276 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:11.833521 ignition[763]: kargs: kargs passed
Jan 16 08:57:11.834911 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 08:57:11.833584 ignition[763]: Ignition finished successfully
Jan 16 08:57:11.842503 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 08:57:11.862497 ignition[769]: Ignition 2.20.0
Jan 16 08:57:11.862509 ignition[769]: Stage: disks
Jan 16 08:57:11.862736 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:11.865148 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 08:57:11.862750 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:11.863924 ignition[769]: disks: disks passed
Jan 16 08:57:11.866570 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 08:57:11.863979 ignition[769]: Ignition finished successfully
Jan 16 08:57:11.871689 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 08:57:11.872480 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 08:57:11.873476 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 08:57:11.874264 systemd[1]: Reached target basic.target - Basic System.
Jan 16 08:57:11.880437 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 08:57:11.898271 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 08:57:11.902426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 08:57:11.909343 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 08:57:12.038199 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 16 08:57:12.038819 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 08:57:12.040228 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 08:57:12.049390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:57:12.053364 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 08:57:12.057900 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 16 08:57:12.060228 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (785)
Jan 16 08:57:12.063200 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:12.063248 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:12.063261 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:57:12.063341 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 08:57:12.071927 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:57:12.072627 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 08:57:12.072678 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 08:57:12.079387 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:57:12.080372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 08:57:12.090399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 08:57:12.168987 coreos-metadata[788]: Jan 16 08:57:12.168 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:57:12.172335 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 08:57:12.176094 coreos-metadata[787]: Jan 16 08:57:12.176 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:57:12.180339 coreos-metadata[788]: Jan 16 08:57:12.180 INFO Fetch successful
Jan 16 08:57:12.181742 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Jan 16 08:57:12.185778 coreos-metadata[788]: Jan 16 08:57:12.185 INFO wrote hostname ci-4152.2.0-e-9b059e58c2 to /sysroot/etc/hostname
Jan 16 08:57:12.187737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 08:57:12.188843 coreos-metadata[787]: Jan 16 08:57:12.188 INFO Fetch successful
Jan 16 08:57:12.191707 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 08:57:12.193237 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jan 16 08:57:12.193379 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jan 16 08:57:12.198097 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 08:57:12.308568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 08:57:12.314363 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 08:57:12.316420 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 08:57:12.329221 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:12.362035 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 08:57:12.363167 ignition[906]: INFO : Ignition 2.20.0
Jan 16 08:57:12.363167 ignition[906]: INFO : Stage: mount
Jan 16 08:57:12.364350 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 08:57:12.364350 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:57:12.365279 ignition[906]: INFO : mount: mount passed
Jan 16 08:57:12.365279 ignition[906]: INFO : Ignition finished successfully
Jan 16 08:57:12.366016 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 08:57:12.373348 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 08:57:12.486881 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 08:57:12.495483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:57:12.505213 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (917)
Jan 16 08:57:12.508443 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 16 08:57:12.508511 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:57:12.508525 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:57:12.513343 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:57:12.514876 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:57:12.542196 ignition[934]: INFO : Ignition 2.20.0 Jan 16 08:57:12.542196 ignition[934]: INFO : Stage: files Jan 16 08:57:12.542196 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:57:12.542196 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:57:12.544215 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jan 16 08:57:12.545151 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 08:57:12.545151 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 08:57:12.547705 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 08:57:12.548529 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 08:57:12.549225 unknown[934]: wrote ssh authorized keys file for user: core Jan 16 08:57:12.549902 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 08:57:12.551760 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:57:12.552673 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 16 08:57:12.582660 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 08:57:12.648966 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 16 08:57:12.649765 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 08:57:12.649765 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 16 08:57:12.936393 systemd-networkd[747]: eth1: Gained IPv6LL Jan 16 08:57:13.088583 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 16 08:57:13.128703 systemd-networkd[747]: eth0: Gained IPv6LL Jan 16 08:57:13.165655 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 16 08:57:13.165655 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 08:57:13.167677 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 16 08:57:13.600542 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 16 08:57:13.856483 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 16 08:57:13.856483 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 08:57:13.857963 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:57:13.857963 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 08:57:13.857963 ignition[934]: INFO : files: files passed Jan 16 08:57:13.857963 ignition[934]: INFO : Ignition finished successfully Jan 16 08:57:13.859034 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 08:57:13.865518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 08:57:13.867965 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 08:57:13.873962 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 08:57:13.874089 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
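Annotation: the "files" stage above is driven entirely by the provisioning config: it writes the Helm and cilium-cli tarballs, drops the user manifests, symlinks the kubernetes sysext into /etc/extensions, writes prepare-helm.service, and enables it via preset. The config itself is not shown in the log; the fragment below is an illustrative Ignition (spec 3.x) sketch that would produce operations like op(3), op(a), and op(c)-op(e), with the SSH key and unit contents elided:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (elided)"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "(elided)" }
        ]
      }
    }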
Jan 16 08:57:13.895202 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:13.895202 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:13.898623 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 08:57:13.899944 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:57:13.901266 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 08:57:13.908435 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 08:57:13.950108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 08:57:13.950257 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 08:57:13.951303 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 08:57:13.951972 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 08:57:13.953141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 08:57:13.959393 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 08:57:13.973973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:57:13.986505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 08:57:13.996932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:57:13.998154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:57:13.999136 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 08:57:13.999582 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 08:57:13.999725 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 08:57:14.001263 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 08:57:14.002264 systemd[1]: Stopped target basic.target - Basic System. Jan 16 08:57:14.003126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 08:57:14.004003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 08:57:14.004644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 08:57:14.005372 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 08:57:14.006666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 08:57:14.007542 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 08:57:14.008542 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 08:57:14.009283 systemd[1]: Stopped target swap.target - Swaps. Jan 16 08:57:14.009861 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 08:57:14.009992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 08:57:14.010875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:57:14.011823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:57:14.012663 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 08:57:14.012861 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 16 08:57:14.013488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 08:57:14.013645 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 08:57:14.014929 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 08:57:14.015130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 08:57:14.016905 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 08:57:14.017095 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 08:57:14.017701 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 08:57:14.017817 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 08:57:14.024594 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 08:57:14.025070 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 08:57:14.025307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:57:14.029342 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 08:57:14.029818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 08:57:14.029958 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:57:14.032696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 08:57:14.033516 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 08:57:14.042479 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 08:57:14.042584 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 08:57:14.049990 ignition[987]: INFO : Ignition 2.20.0 Jan 16 08:57:14.049990 ignition[987]: INFO : Stage: umount Jan 16 08:57:14.051518 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 08:57:14.051518 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 08:57:14.051518 ignition[987]: INFO : umount: umount passed Jan 16 08:57:14.051518 ignition[987]: INFO : Ignition finished successfully Jan 16 08:57:14.055496 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 08:57:14.055601 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 08:57:14.056849 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 08:57:14.056938 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 08:57:14.057391 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 08:57:14.057433 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 08:57:14.058759 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 08:57:14.058798 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 08:57:14.060498 systemd[1]: Stopped target network.target - Network. Jan 16 08:57:14.061163 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 08:57:14.061304 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 08:57:14.063642 systemd[1]: Stopped target paths.target - Path Units. Jan 16 08:57:14.064637 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 08:57:14.064687 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:57:14.065462 systemd[1]: Stopped target slices.target - Slice Units. 
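Annotation: the teardown above runs Ignition's final "umount" stage and stops the earlier stage units in reverse. Taken with the earlier banners, this boot exercised the full canonical stage sequence: fetch-offline, fetch, kargs, disks, mount, files, umount. A sketch for pulling that sequence out of the journal afterwards (assumes the initrd journal survives into the real root, as it does here):

    # List the Ignition stage banners recorded this boot
    journalctl -b -t ignition | grep 'Stage:'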
Jan 16 08:57:14.066258 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 08:57:14.066979 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 08:57:14.067042 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 08:57:14.068852 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 08:57:14.068898 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 08:57:14.069549 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 08:57:14.069611 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 08:57:14.070216 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 08:57:14.070265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 08:57:14.070992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 08:57:14.080394 systemd-networkd[747]: eth0: DHCPv6 lease lost Jan 16 08:57:14.082695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 08:57:14.084688 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 08:57:14.086673 systemd-networkd[747]: eth1: DHCPv6 lease lost Jan 16 08:57:14.092533 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 08:57:14.092693 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 08:57:14.095025 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 08:57:14.095070 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:57:14.101471 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 08:57:14.123208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 08:57:14.123320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 08:57:14.124633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:57:14.128328 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 08:57:14.128494 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 08:57:14.141341 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:57:14.141463 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:57:14.144572 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 08:57:14.144634 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 08:57:14.146357 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 08:57:14.146425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:57:14.154857 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 08:57:14.155039 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:57:14.155882 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 08:57:14.155967 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 08:57:14.157217 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 08:57:14.157321 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 08:57:14.159131 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 08:57:14.159494 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 08:57:14.159906 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 16 08:57:14.159942 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:57:14.160738 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 08:57:14.160803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 08:57:14.161928 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 08:57:14.161986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 08:57:14.162816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 08:57:14.162863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 08:57:14.163907 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 08:57:14.163955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 08:57:14.170453 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 08:57:14.170881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 08:57:14.170939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:57:14.173918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:57:14.173987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:14.180539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 08:57:14.180706 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 08:57:14.182436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 08:57:14.189416 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 08:57:14.198730 systemd[1]: Switching root. Jan 16 08:57:14.233107 systemd-journald[183]: Journal stopped Jan 16 08:57:15.348265 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 16 08:57:15.348344 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 08:57:15.348359 kernel: SELinux: policy capability open_perms=1 Jan 16 08:57:15.348372 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 08:57:15.348384 kernel: SELinux: policy capability always_check_network=0 Jan 16 08:57:15.348395 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 08:57:15.348408 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 08:57:15.348424 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 08:57:15.348442 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 08:57:15.348454 kernel: audit: type=1403 audit(1737017834.411:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 08:57:15.348471 systemd[1]: Successfully loaded SELinux policy in 37.512ms. Jan 16 08:57:15.348490 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.187ms. Jan 16 08:57:15.348507 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 08:57:15.348522 systemd[1]: Detected virtualization kvm. Jan 16 08:57:15.348535 systemd[1]: Detected architecture x86-64. Jan 16 08:57:15.348547 systemd[1]: Detected first boot. Jan 16 08:57:15.348564 systemd[1]: Hostname set to <ci-4152.2.0-e-9b059e58c2>. Jan 16 08:57:15.348577 systemd[1]: Initializing machine ID from VM UUID.
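Annotation: this is the initrd-to-real-root handoff. journald in the initrd (PID 183) is terminated, PID 1 switches root, loads the SELinux policy, relabels the API filesystems, and re-executes as the full system manager. The switch itself is roughly equivalent to the following (a sketch; initrd-switch-root.service performs this internally):

    # Hand PID 1 over to the systemd installed on the new root
    systemctl switch-root /sysroot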
Jan 16 08:57:15.348590 zram_generator::config[1031]: No configuration found. Jan 16 08:57:15.348605 systemd[1]: Populated /etc with preset unit settings. Jan 16 08:57:15.348617 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 08:57:15.348629 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 16 08:57:15.348642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 08:57:15.348660 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 08:57:15.348675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 08:57:15.348688 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 08:57:15.348704 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 08:57:15.348718 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 08:57:15.348730 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 08:57:15.348743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 08:57:15.348755 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 08:57:15.348768 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 08:57:15.348781 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 08:57:15.348797 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 08:57:15.348810 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 08:57:15.348823 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 08:57:15.348836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 08:57:15.348849 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 08:57:15.348862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 08:57:15.348874 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 08:57:15.348890 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 08:57:15.348903 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 08:57:15.348916 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 08:57:15.348928 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 08:57:15.348944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 08:57:15.348956 systemd[1]: Reached target slices.target - Slice Units. Jan 16 08:57:15.348969 systemd[1]: Reached target swap.target - Swaps. Jan 16 08:57:15.348982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 08:57:15.348997 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 08:57:15.349010 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 08:57:15.349023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 08:57:15.349035 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 08:57:15.349047 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
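Annotation: the "zram_generator::config[1031]: No configuration found" line means no zram swap was configured, so the generator created no units. For reference, a minimal configuration that would change that, with illustrative values (this host deliberately has none):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)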
Jan 16 08:57:15.349060 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 08:57:15.349072 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 08:57:15.349086 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 08:57:15.349099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:15.349114 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 08:57:15.349127 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 08:57:15.349139 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 08:57:15.349153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 08:57:15.349165 systemd[1]: Reached target machines.target - Containers. Jan 16 08:57:15.349194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 08:57:15.349208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:15.349220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 08:57:15.349233 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 08:57:15.349249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:57:15.349263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:57:15.349276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:15.349289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 08:57:15.349302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:15.349316 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 08:57:15.349329 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 08:57:15.349341 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 08:57:15.349357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 08:57:15.349370 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 08:57:15.349383 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 08:57:15.349395 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 08:57:15.349407 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 08:57:15.349420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 08:57:15.349432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 08:57:15.349445 systemd[1]: verity-setup.service: Deactivated successfully. Jan 16 08:57:15.349458 systemd[1]: Stopped verity-setup.service. Jan 16 08:57:15.349473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:15.349485 kernel: fuse: init (API version 7.39) Jan 16 08:57:15.349498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
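Annotation: the modprobe@*.service units started here are instances of a single template that loads one kernel module per instance name; the "fuse: init (API version 7.39)" kernel line confirms that module came up. The equivalent by hand:

    # Either of these loads the same module the template instance does
    systemctl start modprobe@fuse.service
    modprobe fuse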
Jan 16 08:57:15.349512 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 08:57:15.349525 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 08:57:15.349537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 08:57:15.349553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 08:57:15.349565 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 08:57:15.349578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 08:57:15.349590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:15.349603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:15.349619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:57:15.349635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:15.349647 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 08:57:15.349660 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 08:57:15.349673 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 08:57:15.349685 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 08:57:15.349698 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 08:57:15.349711 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 08:57:15.349767 systemd-journald[1100]: Collecting audit messages is disabled. Jan 16 08:57:15.349792 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 08:57:15.349805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 08:57:15.349818 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 08:57:15.349830 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 08:57:15.349844 systemd-journald[1100]: Journal started Jan 16 08:57:15.349872 systemd-journald[1100]: Runtime Journal (/run/log/journal/1a6370eb4b764e1e9ff9b892c782a3ff) is 4.9M, max 39.3M, 34.4M free. Jan 16 08:57:15.013685 systemd[1]: Queued start job for default target multi-user.target. Jan 16 08:57:15.033764 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 08:57:15.034282 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 08:57:15.370194 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 08:57:15.379355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 08:57:15.379443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:15.386639 kernel: loop: module loaded Jan 16 08:57:15.386731 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 08:57:15.390989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:15.400507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 08:57:15.400601 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
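Annotation: journald has started with a volatile journal under /run sized from the tmpfs (4.9M used, 39.3M cap) and will be flushed to persistent storage shortly (see the journal-flush messages below). Usage can be inspected, and the limits pinned explicitly, as follows; the option names are real journald.conf settings, but the values are illustrative since this host derives its caps from filesystem size:

    journalctl --disk-usage
    # /etc/systemd/journald.conf
    # [Journal]
    # RuntimeMaxUse=40M
    # SystemMaxUse=200M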
Jan 16 08:57:15.412087 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 08:57:15.408903 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:15.409093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:15.409829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 08:57:15.410483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 08:57:15.411118 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 08:57:15.411636 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 08:57:15.425004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 08:57:15.445025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 08:57:15.453535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 08:57:15.454062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:15.464600 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:57:15.470273 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 08:57:15.472535 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 08:57:15.496433 kernel: loop0: detected capacity change from 0 to 138184 Jan 16 08:57:15.482500 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 08:57:15.539509 systemd-journald[1100]: Time spent on flushing to /var/log/journal/1a6370eb4b764e1e9ff9b892c782a3ff is 140.968ms for 988 entries. Jan 16 08:57:15.539509 systemd-journald[1100]: System Journal (/var/log/journal/1a6370eb4b764e1e9ff9b892c782a3ff) is 8.0M, max 195.6M, 187.6M free. Jan 16 08:57:15.725075 systemd-journald[1100]: Received client request to flush runtime journal. Jan 16 08:57:15.725146 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 08:57:15.725193 kernel: loop1: detected capacity change from 0 to 8 Jan 16 08:57:15.725221 kernel: ACPI: bus type drm_connector registered Jan 16 08:57:15.725241 kernel: loop2: detected capacity change from 0 to 140992 Jan 16 08:57:15.588254 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 08:57:15.593155 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 08:57:15.609571 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:57:15.609774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:57:15.613727 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 08:57:15.631451 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 08:57:15.676350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 08:57:15.687740 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 08:57:15.704272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:57:15.732488 kernel: loop3: detected capacity change from 0 to 205544 Jan 16 08:57:15.730717 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
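Annotation: systemd-machine-id-commit.service makes the machine ID, which was initialized from the VM UUID while /etc was still transient, permanent on disk. Per its upstream unit definition it amounts to:

    # Commit the transient machine ID to /etc/machine-id
    systemd-machine-id-setup --commit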
Jan 16 08:57:15.782791 kernel: loop4: detected capacity change from 0 to 138184 Jan 16 08:57:15.783763 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 08:57:15.802150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 08:57:15.805026 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 08:57:15.841226 kernel: loop5: detected capacity change from 0 to 8 Jan 16 08:57:15.830925 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 16 08:57:15.830946 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 16 08:57:15.844107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 08:57:15.855080 kernel: loop6: detected capacity change from 0 to 140992 Jan 16 08:57:15.870283 kernel: loop7: detected capacity change from 0 to 205544 Jan 16 08:57:15.906740 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 08:57:15.907462 (sd-merge)[1172]: Merged extensions into '/usr'. Jan 16 08:57:15.920070 systemd[1]: Reloading requested from client PID 1120 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 08:57:15.920095 systemd[1]: Reloading... Jan 16 08:57:16.078200 zram_generator::config[1206]: No configuration found. Jan 16 08:57:16.270727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:16.325239 ldconfig[1115]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 08:57:16.347653 systemd[1]: Reloading finished in 426 ms. Jan 16 08:57:16.374157 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 08:57:16.375103 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 08:57:16.387510 systemd[1]: Starting ensure-sysext.service... Jan 16 08:57:16.392004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 08:57:16.409261 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jan 16 08:57:16.409286 systemd[1]: Reloading... Jan 16 08:57:16.471586 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 08:57:16.473605 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 08:57:16.476823 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 08:57:16.478600 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 16 08:57:16.478749 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 16 08:57:16.489628 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 08:57:16.489795 systemd-tmpfiles[1247]: Skipping /boot Jan 16 08:57:16.520205 zram_generator::config[1273]: No configuration found. Jan 16 08:57:16.527289 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 16 08:57:16.527305 systemd-tmpfiles[1247]: Skipping /boot Jan 16 08:57:16.687623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:16.761841 systemd[1]: Reloading finished in 352 ms. Jan 16 08:57:16.790747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 08:57:16.805426 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 08:57:16.814475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 08:57:16.819403 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 08:57:16.825357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 08:57:16.833456 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 08:57:16.842213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.842483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:16.855568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:57:16.859454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:16.867547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:16.868730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:16.869143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.878588 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 08:57:16.881205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.881444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:16.881641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:16.881752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.888630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.889947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:16.898099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 08:57:16.899216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:16.899515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:16.913565 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
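Annotation: the (sd-merge) lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-digitalocean) onto /usr and /opt, which is why systemd then reloads its unit database twice. The systemd-tmpfiles "Duplicate line" messages are benign notices that later tmpfiles.d entries for the same path are ignored. The sysext mechanism can also be driven manually:

    systemd-sysext list     # show available extension images
    systemd-sysext merge    # overlay them onto /usr and /opt
    systemd-sysext unmerge  # drop the overlay again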
Jan 16 08:57:16.914682 systemd[1]: Finished ensure-sysext.service. Jan 16 08:57:16.915457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:16.915628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:16.927612 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 08:57:16.952614 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:16.952810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:16.963863 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 08:57:16.964838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:57:16.965004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:16.965783 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 08:57:16.965934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 08:57:16.968410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:16.968528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:16.976547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 08:57:16.977528 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 08:57:16.978422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 08:57:16.992428 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 08:57:16.992938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 08:57:16.993210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 08:57:17.043815 augenrules[1364]: No rules Jan 16 08:57:17.046120 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:57:17.047547 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 08:57:17.053415 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 08:57:17.079813 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jan 16 08:57:17.096662 systemd-resolved[1321]: Positive Trust Anchors: Jan 16 08:57:17.098234 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 08:57:17.098286 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 08:57:17.106166 systemd-resolved[1321]: Using system hostname 'ci-4152.2.0-e-9b059e58c2'. Jan 16 08:57:17.107928 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
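Annotation: systemd-resolved logs its DNSSEC configuration at startup: the positive trust anchor is the IANA root KSK (key tag 20326), and the long list of negative anchors exempts private and special-use zones (RFC 6761/6762 and similar) from DNSSEC validation. Runtime state can be checked with:

    resolvectl status    # per-link DNS servers, DNSSEC mode, current scopes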
Jan 16 08:57:17.108676 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 08:57:17.109731 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 08:57:17.111064 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 08:57:17.118566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 08:57:17.127542 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 08:57:17.219876 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 16 08:57:17.226166 systemd-networkd[1376]: lo: Link UP Jan 16 08:57:17.226233 systemd-networkd[1376]: lo: Gained carrier Jan 16 08:57:17.228691 systemd-networkd[1376]: Enumeration completed Jan 16 08:57:17.228816 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 08:57:17.229468 systemd[1]: Reached target network.target - Network. Jan 16 08:57:17.230168 systemd-networkd[1376]: eth0: Configuring with /run/systemd/network/10-f6:0b:37:a0:b5:ac.network. Jan 16 08:57:17.234762 systemd-networkd[1376]: eth0: Link UP Jan 16 08:57:17.234771 systemd-networkd[1376]: eth0: Gained carrier Jan 16 08:57:17.240046 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 08:57:17.247029 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:17.255821 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 08:57:17.256498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:17.256679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 08:57:17.264700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 08:57:17.272395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 08:57:17.273195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378) Jan 16 08:57:17.282490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 08:57:17.285382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 08:57:17.285435 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 08:57:17.285453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 08:57:17.307254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 08:57:17.307528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 08:57:17.326523 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 08:57:17.326713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 08:57:17.332573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 08:57:17.332757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 08:57:17.340212 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 08:57:17.343135 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. 
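Annotation: systemd-networkd matched eth0 against a runtime .network file named after the NIC's MAC address. Only the filename appears in the log, so the contents below are an assumption sketching what a minimal MAC-matched file of this kind looks like:

    # /run/systemd/network/10-f6:0b:37:a0:b5:ac.network (contents assumed)
    [Match]
    MACAddress=f6:0b:37:a0:b5:ac

    [Network]
    DHCP=ipv4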
Jan 16 08:57:17.348688 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 08:57:17.348761 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 08:57:17.368583 systemd-networkd[1376]: eth1: Configuring with /run/systemd/network/10-26:b8:bd:a8:24:34.network. Jan 16 08:57:17.368983 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:17.371045 systemd-networkd[1376]: eth1: Link UP Jan 16 08:57:17.371056 systemd-networkd[1376]: eth1: Gained carrier Jan 16 08:57:17.373722 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:17.374521 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:17.382449 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 16 08:57:17.391203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 08:57:17.401391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 08:57:17.409460 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 08:57:17.414080 kernel: ACPI: button: Power Button [PWRF] Jan 16 08:57:17.425535 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 08:57:17.433197 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 16 08:57:17.488037 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 08:57:17.502196 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 08:57:17.502282 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 08:57:17.504523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:57:17.509201 kernel: Console: switching to colour dummy device 80x25 Jan 16 08:57:17.511212 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 08:57:17.511273 kernel: [drm] features: -context_init Jan 16 08:57:17.513200 kernel: [drm] number of scanouts: 1 Jan 16 08:57:17.513251 kernel: [drm] number of cap sets: 0 Jan 16 08:57:17.516247 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 08:57:17.537201 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 08:57:17.541625 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 08:57:17.551556 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 08:57:17.545130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:57:17.545385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:17.599603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:57:17.609004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 08:57:17.609312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:17.627470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 08:57:17.653812 kernel: EDAC MC: Ver: 3.0.0 Jan 16 08:57:17.686924 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 16 08:57:17.692438 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 08:57:17.716235 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 08:57:17.714297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 08:57:17.748766 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 08:57:17.749188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 08:57:17.750382 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 08:57:17.750657 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 08:57:17.750817 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 08:57:17.751210 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 08:57:17.751479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 08:57:17.751590 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 08:57:17.751673 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 08:57:17.751707 systemd[1]: Reached target paths.target - Path Units. Jan 16 08:57:17.751786 systemd[1]: Reached target timers.target - Timer Units. Jan 16 08:57:17.753637 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 08:57:17.755674 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 08:57:17.762283 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 08:57:17.765075 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 08:57:17.768600 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 08:57:17.769867 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 08:57:17.771271 systemd[1]: Reached target basic.target - Basic System. Jan 16 08:57:17.771882 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:57:17.771918 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 08:57:17.778347 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 08:57:17.782347 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 08:57:17.790445 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 08:57:17.797460 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 08:57:17.808353 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 08:57:17.813638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 08:57:17.815101 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 08:57:17.819448 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 08:57:17.825836 jq[1440]: false Jan 16 08:57:17.829454 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
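Annotation: this stretch is systemd reaching its steady-state plumbing: timers (logrotate, mdadm, tmpfiles-clean) and activation sockets (dbus, docker, sshd) come up before the services behind them, so the services start lazily on first use. Both sets are easy to audit on a running host:

    systemctl list-timers     # logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer, ...
    systemctl list-sockets    # dbus.socket, docker.socket, sshd.socket, ...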
Jan 16 08:57:17.833386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 08:57:17.842693 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 08:57:17.852839 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 08:57:17.853814 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 08:57:17.855380 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 08:57:17.858430 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 08:57:17.869334 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 08:57:17.869879 dbus-daemon[1439]: [system] SELinux support is enabled Jan 16 08:57:17.873894 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 08:57:17.879739 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 08:57:17.889751 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 08:57:17.889984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 08:57:17.902372 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 08:57:17.902426 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 08:57:17.903204 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 08:57:17.904910 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 16 08:57:17.904952 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 08:57:17.912244 coreos-metadata[1438]: Jan 16 08:57:17.911 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:57:17.912244 coreos-metadata[1438]: Jan 16 08:57:17.911 INFO Fetch successful Jan 16 08:57:17.941723 jq[1450]: true Jan 16 08:57:17.942614 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 08:57:17.944614 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
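Annotation: ssh-key-proc-cmdline.service installs an SSH key passed on the kernel command line; on Flatcar this is typically the sshkey= parameter (an assumption here, since none appears in this boot's command line and the unit simply finishes). An illustrative check:

    # The key, if any, would appear on the kernel command line:
    grep -o 'sshkey="[^"]*"' /proc/cmdline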
Jan 16 08:57:17.962199 extend-filesystems[1443]: Found loop4 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found loop5 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found loop6 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found loop7 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda1 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda2 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda3 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found usr Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda4 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda6 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda7 Jan 16 08:57:17.962199 extend-filesystems[1443]: Found vda9 Jan 16 08:57:18.017931 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 16 08:57:18.003656 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 08:57:18.023461 update_engine[1449]: I20250116 08:57:17.986106 1449 main.cc:92] Flatcar Update Engine starting Jan 16 08:57:18.023461 update_engine[1449]: I20250116 08:57:18.010058 1449 update_check_scheduler.cc:74] Next update check in 11m19s Jan 16 08:57:18.023726 tar[1452]: linux-amd64/helm Jan 16 08:57:18.009836 systemd[1]: Started update-engine.service - Update Engine. Jan 16 08:57:18.023961 jq[1466]: true Jan 16 08:57:18.020048 systemd-logind[1448]: New seat seat0. Jan 16 08:57:18.030444 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 08:57:18.032803 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 16 08:57:18.045565 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 08:57:18.045597 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Jan 16 08:57:18.045607 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 08:57:18.047352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 08:57:18.048147 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 08:57:18.048675 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 08:57:18.048703 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 08:57:18.049447 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 08:57:18.060620 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 08:57:18.089705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1377) Jan 16 08:57:18.180764 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 08:57:18.215390 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 08:57:18.215390 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 08:57:18.215390 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 08:57:18.231083 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 16 08:57:18.231083 extend-filesystems[1443]: Found vdb Jan 16 08:57:18.219021 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 08:57:18.219369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 16 08:57:18.242554 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:57:18.245942 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 08:57:18.269609 systemd[1]: Starting sshkeys.service... Jan 16 08:57:18.335046 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 08:57:18.346779 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 08:57:18.434408 coreos-metadata[1507]: Jan 16 08:57:18.434 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:57:18.452504 coreos-metadata[1507]: Jan 16 08:57:18.450 INFO Fetch successful Jan 16 08:57:18.475753 unknown[1507]: wrote ssh authorized keys file for user: core Jan 16 08:57:18.483141 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 08:57:18.516066 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 08:57:18.534243 update-ssh-keys[1518]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:57:18.528042 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 08:57:18.534787 systemd[1]: Finished sshkeys.service. Jan 16 08:57:18.547665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 08:57:18.565559 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 08:57:18.586972 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 08:57:18.590334 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 08:57:18.603687 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 08:57:18.629853 containerd[1468]: time="2025-01-16T08:57:18.629749282Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 16 08:57:18.651713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 08:57:18.662620 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 08:57:18.672604 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 08:57:18.673277 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 08:57:18.676283 containerd[1468]: time="2025-01-16T08:57:18.675916923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.680964 containerd[1468]: time="2025-01-16T08:57:18.680805256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:57:18.680964 containerd[1468]: time="2025-01-16T08:57:18.680844392Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 08:57:18.680964 containerd[1468]: time="2025-01-16T08:57:18.680862468Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681302747Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681330017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681462445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681476694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681655725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681669098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681683200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681692090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.681948 containerd[1468]: time="2025-01-16T08:57:18.681767856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.682213 containerd[1468]: time="2025-01-16T08:57:18.681968486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:57:18.682213 containerd[1468]: time="2025-01-16T08:57:18.682070685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:57:18.682213 containerd[1468]: time="2025-01-16T08:57:18.682082788Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 08:57:18.682213 containerd[1468]: time="2025-01-16T08:57:18.682192302Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 08:57:18.682298 containerd[1468]: time="2025-01-16T08:57:18.682283395Z" level=info msg="metadata content store policy set" policy=shared Jan 16 08:57:18.692493 containerd[1468]: time="2025-01-16T08:57:18.692428652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 08:57:18.692609 containerd[1468]: time="2025-01-16T08:57:18.692516608Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 08:57:18.692609 containerd[1468]: time="2025-01-16T08:57:18.692533912Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 08:57:18.692653 containerd[1468]: time="2025-01-16T08:57:18.692608390Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 08:57:18.692653 containerd[1468]: time="2025-01-16T08:57:18.692625339Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 16 08:57:18.692819 containerd[1468]: time="2025-01-16T08:57:18.692802902Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 08:57:18.693039 containerd[1468]: time="2025-01-16T08:57:18.693024645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 08:57:18.693162 containerd[1468]: time="2025-01-16T08:57:18.693145763Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 08:57:18.693218 containerd[1468]: time="2025-01-16T08:57:18.693167363Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 08:57:18.693218 containerd[1468]: time="2025-01-16T08:57:18.693212119Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 08:57:18.693260 containerd[1468]: time="2025-01-16T08:57:18.693228031Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693260 containerd[1468]: time="2025-01-16T08:57:18.693247796Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693309 containerd[1468]: time="2025-01-16T08:57:18.693259828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693309 containerd[1468]: time="2025-01-16T08:57:18.693273841Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693309 containerd[1468]: time="2025-01-16T08:57:18.693288805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693368 containerd[1468]: time="2025-01-16T08:57:18.693312975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693368 containerd[1468]: time="2025-01-16T08:57:18.693327386Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693368 containerd[1468]: time="2025-01-16T08:57:18.693338252Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 08:57:18.693368 containerd[1468]: time="2025-01-16T08:57:18.693358431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693371522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693383911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693396879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693408737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693430491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 16 08:57:18.693453 containerd[1468]: time="2025-01-16T08:57:18.693443208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693455619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693479273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693499394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693515513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693532997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693548004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693581810Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 08:57:18.693617 containerd[1468]: time="2025-01-16T08:57:18.693605531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693618177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693628390Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693667790Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693684573Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693695720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693706917Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693716092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693727022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693736027Z" level=info msg="NRI interface is disabled by configuration." Jan 16 08:57:18.693791 containerd[1468]: time="2025-01-16T08:57:18.693745473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 16 08:57:18.694075 containerd[1468]: time="2025-01-16T08:57:18.694028576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 08:57:18.694075 containerd[1468]: time="2025-01-16T08:57:18.694075896Z" level=info msg="Connect containerd service" Jan 16 08:57:18.694265 containerd[1468]: time="2025-01-16T08:57:18.694125855Z" level=info msg="using legacy CRI server" Jan 16 08:57:18.694265 containerd[1468]: time="2025-01-16T08:57:18.694136043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 08:57:18.695198 containerd[1468]: time="2025-01-16T08:57:18.694649543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 08:57:18.695967 containerd[1468]: time="2025-01-16T08:57:18.695938838Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 08:57:18.696218 
containerd[1468]: time="2025-01-16T08:57:18.696170351Z" level=info msg="Start subscribing containerd event" Jan 16 08:57:18.696317 containerd[1468]: time="2025-01-16T08:57:18.696302685Z" level=info msg="Start recovering state" Jan 16 08:57:18.696384 containerd[1468]: time="2025-01-16T08:57:18.696373665Z" level=info msg="Start event monitor" Jan 16 08:57:18.696424 containerd[1468]: time="2025-01-16T08:57:18.696389093Z" level=info msg="Start snapshots syncer" Jan 16 08:57:18.696424 containerd[1468]: time="2025-01-16T08:57:18.696398890Z" level=info msg="Start cni network conf syncer for default" Jan 16 08:57:18.696424 containerd[1468]: time="2025-01-16T08:57:18.696407363Z" level=info msg="Start streaming server" Jan 16 08:57:18.697249 containerd[1468]: time="2025-01-16T08:57:18.697227098Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 08:57:18.697367 containerd[1468]: time="2025-01-16T08:57:18.697350855Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 08:57:18.697545 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 08:57:18.698603 containerd[1468]: time="2025-01-16T08:57:18.698579855Z" level=info msg="containerd successfully booted in 0.069857s" Jan 16 08:57:18.762393 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 16 08:57:18.762998 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:18.767442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 08:57:18.770316 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 08:57:18.780650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:18.790674 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 08:57:18.854717 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 08:57:18.927221 tar[1452]: linux-amd64/LICENSE Jan 16 08:57:18.929340 tar[1452]: linux-amd64/README.md Jan 16 08:57:18.943509 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 08:57:19.209066 systemd-networkd[1376]: eth1: Gained IPv6LL Jan 16 08:57:19.210413 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:19.637678 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 08:57:19.647215 systemd[1]: Started sshd@0-64.227.106.156:22-147.75.109.163:41284.service - OpenSSH per-connection server daemon (147.75.109.163:41284). Jan 16 08:57:19.766096 sshd[1557]: Accepted publickey for core from 147.75.109.163 port 41284 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:19.768542 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:19.783838 systemd-logind[1448]: New session 1 of user core. Jan 16 08:57:19.794252 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 08:57:19.797842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 08:57:19.822422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:19.825945 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 16 08:57:19.833503 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:57:19.843220 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 08:57:19.856611 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 08:57:19.870707 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 08:57:19.997355 systemd[1567]: Queued start job for default target default.target. Jan 16 08:57:20.005826 systemd[1567]: Created slice app.slice - User Application Slice. Jan 16 08:57:20.005870 systemd[1567]: Reached target paths.target - Paths. Jan 16 08:57:20.005887 systemd[1567]: Reached target timers.target - Timers. Jan 16 08:57:20.007756 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 08:57:20.039211 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 08:57:20.039452 systemd[1567]: Reached target sockets.target - Sockets. Jan 16 08:57:20.039472 systemd[1567]: Reached target basic.target - Basic System. Jan 16 08:57:20.039542 systemd[1567]: Reached target default.target - Main User Target. Jan 16 08:57:20.039589 systemd[1567]: Startup finished in 159ms. Jan 16 08:57:20.039663 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 08:57:20.048523 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 08:57:20.054666 systemd[1]: Startup finished in 1.161s (kernel) + 5.668s (initrd) + 5.680s (userspace) = 12.510s. Jan 16 08:57:20.145053 systemd[1]: Started sshd@1-64.227.106.156:22-147.75.109.163:41292.service - OpenSSH per-connection server daemon (147.75.109.163:41292). Jan 16 08:57:20.244410 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 41292 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:20.246142 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:20.255527 systemd-logind[1448]: New session 2 of user core. Jan 16 08:57:20.261644 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 08:57:20.336221 sshd[1588]: Connection closed by 147.75.109.163 port 41292 Jan 16 08:57:20.338555 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:20.350322 systemd[1]: sshd@1-64.227.106.156:22-147.75.109.163:41292.service: Deactivated successfully. Jan 16 08:57:20.353630 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 08:57:20.355289 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 16 08:57:20.373835 systemd[1]: Started sshd@2-64.227.106.156:22-147.75.109.163:41296.service - OpenSSH per-connection server daemon (147.75.109.163:41296). Jan 16 08:57:20.375040 systemd-logind[1448]: Removed session 2. Jan 16 08:57:20.444401 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 41296 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:20.446692 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:20.455158 systemd-logind[1448]: New session 3 of user core. Jan 16 08:57:20.464748 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 16 08:57:20.528259 sshd[1595]: Connection closed by 147.75.109.163 port 41296 Jan 16 08:57:20.527370 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:20.541005 systemd[1]: sshd@2-64.227.106.156:22-147.75.109.163:41296.service: Deactivated successfully. Jan 16 08:57:20.547583 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 08:57:20.550742 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 16 08:57:20.560697 systemd[1]: Started sshd@3-64.227.106.156:22-147.75.109.163:41304.service - OpenSSH per-connection server daemon (147.75.109.163:41304). Jan 16 08:57:20.567905 systemd-logind[1448]: Removed session 3. Jan 16 08:57:20.629806 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 41304 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:20.632423 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:20.641026 systemd-logind[1448]: New session 4 of user core. Jan 16 08:57:20.645420 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 08:57:20.717345 sshd[1603]: Connection closed by 147.75.109.163 port 41304 Jan 16 08:57:20.714167 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:20.725592 systemd[1]: sshd@3-64.227.106.156:22-147.75.109.163:41304.service: Deactivated successfully. Jan 16 08:57:20.729291 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 08:57:20.731476 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 16 08:57:20.741775 systemd[1]: Started sshd@4-64.227.106.156:22-147.75.109.163:41316.service - OpenSSH per-connection server daemon (147.75.109.163:41316). Jan 16 08:57:20.745071 systemd-logind[1448]: Removed session 4. Jan 16 08:57:20.797296 kubelet[1565]: E0116 08:57:20.794244 1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:57:20.800633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:57:20.800872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:57:20.801597 systemd[1]: kubelet.service: Consumed 1.190s CPU time. Jan 16 08:57:20.806075 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 41316 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:20.808514 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:20.814988 systemd-logind[1448]: New session 5 of user core. Jan 16 08:57:20.827696 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 08:57:20.908211 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 08:57:20.908732 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:57:20.925600 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 16 08:57:20.932220 sshd[1612]: Connection closed by 147.75.109.163 port 41316 Jan 16 08:57:20.931155 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:20.944989 systemd[1]: sshd@4-64.227.106.156:22-147.75.109.163:41316.service: Deactivated successfully. 
Jan 16 08:57:20.948667 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 08:57:20.951518 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 16 08:57:20.958816 systemd[1]: Started sshd@5-64.227.106.156:22-147.75.109.163:41320.service - OpenSSH per-connection server daemon (147.75.109.163:41320). Jan 16 08:57:20.961650 systemd-logind[1448]: Removed session 5. Jan 16 08:57:21.036924 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 41320 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:21.040087 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:21.049654 systemd-logind[1448]: New session 6 of user core. Jan 16 08:57:21.056605 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 08:57:21.123558 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 08:57:21.124663 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:57:21.130693 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 16 08:57:21.139998 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 08:57:21.140486 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:57:21.161905 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 08:57:21.218892 augenrules[1644]: No rules Jan 16 08:57:21.221384 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:57:21.221685 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 08:57:21.224765 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 16 08:57:21.228762 sshd[1620]: Connection closed by 147.75.109.163 port 41320 Jan 16 08:57:21.229785 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:21.240762 systemd[1]: sshd@5-64.227.106.156:22-147.75.109.163:41320.service: Deactivated successfully. Jan 16 08:57:21.244400 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 08:57:21.247406 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 16 08:57:21.252744 systemd[1]: Started sshd@6-64.227.106.156:22-147.75.109.163:41326.service - OpenSSH per-connection server daemon (147.75.109.163:41326). Jan 16 08:57:21.255051 systemd-logind[1448]: Removed session 6. Jan 16 08:57:21.325782 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 41326 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:57:21.328330 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:57:21.338437 systemd-logind[1448]: New session 7 of user core. Jan 16 08:57:21.348638 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 08:57:21.412621 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 08:57:21.413018 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:57:22.021726 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 08:57:22.024125 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 08:57:22.409718 dockerd[1672]: time="2025-01-16T08:57:22.409534941Z" level=info msg="Starting up" Jan 16 08:57:22.682517 dockerd[1672]: time="2025-01-16T08:57:22.682047529Z" level=info msg="Loading containers: start." Jan 16 08:57:22.902654 kernel: Initializing XFRM netlink socket Jan 16 08:57:22.944679 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:22.946121 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:22.957791 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:23.022069 systemd-networkd[1376]: docker0: Link UP Jan 16 08:57:23.022823 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Jan 16 08:57:23.073377 dockerd[1672]: time="2025-01-16T08:57:23.073315468Z" level=info msg="Loading containers: done." Jan 16 08:57:23.099461 dockerd[1672]: time="2025-01-16T08:57:23.099377973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 08:57:23.099727 dockerd[1672]: time="2025-01-16T08:57:23.099518756Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 16 08:57:23.099727 dockerd[1672]: time="2025-01-16T08:57:23.099683603Z" level=info msg="Daemon has completed initialization" Jan 16 08:57:23.163875 dockerd[1672]: time="2025-01-16T08:57:23.163648556Z" level=info msg="API listen on /run/docker.sock" Jan 16 08:57:23.166311 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 08:57:24.108925 containerd[1468]: time="2025-01-16T08:57:24.108877769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 16 08:57:24.745092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609471573.mount: Deactivated successfully. 
Jan 16 08:57:26.152993 containerd[1468]: time="2025-01-16T08:57:26.152767761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:26.154395 containerd[1468]: time="2025-01-16T08:57:26.154128209Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 16 08:57:26.157229 containerd[1468]: time="2025-01-16T08:57:26.155249967Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:26.159545 containerd[1468]: time="2025-01-16T08:57:26.159487703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:26.161326 containerd[1468]: time="2025-01-16T08:57:26.161269587Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.052344953s" Jan 16 08:57:26.161530 containerd[1468]: time="2025-01-16T08:57:26.161507201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 16 08:57:26.164929 containerd[1468]: time="2025-01-16T08:57:26.164823926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 16 08:57:27.874576 containerd[1468]: time="2025-01-16T08:57:27.874487405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:27.876657 containerd[1468]: time="2025-01-16T08:57:27.876201362Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 16 08:57:27.877780 containerd[1468]: time="2025-01-16T08:57:27.877735091Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:27.886433 containerd[1468]: time="2025-01-16T08:57:27.886344459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:27.887821 containerd[1468]: time="2025-01-16T08:57:27.887249579Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.722351834s" Jan 16 08:57:27.887821 containerd[1468]: time="2025-01-16T08:57:27.887305122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 16 08:57:27.888619 
containerd[1468]: time="2025-01-16T08:57:27.888444567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 16 08:57:29.327234 containerd[1468]: time="2025-01-16T08:57:29.326712244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:29.329542 containerd[1468]: time="2025-01-16T08:57:29.329454026Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 16 08:57:29.329954 containerd[1468]: time="2025-01-16T08:57:29.329768847Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:29.336425 containerd[1468]: time="2025-01-16T08:57:29.336321832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:29.339033 containerd[1468]: time="2025-01-16T08:57:29.338389856Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.449893062s" Jan 16 08:57:29.339033 containerd[1468]: time="2025-01-16T08:57:29.338455955Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 16 08:57:29.339974 containerd[1468]: time="2025-01-16T08:57:29.339565055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 16 08:57:30.527058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964955621.mount: Deactivated successfully. Jan 16 08:57:31.051916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 08:57:31.068352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:31.265318 containerd[1468]: time="2025-01-16T08:57:31.264992842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:31.267426 containerd[1468]: time="2025-01-16T08:57:31.267357313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 16 08:57:31.269992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 08:57:31.271290 containerd[1468]: time="2025-01-16T08:57:31.271210329Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:31.274768 containerd[1468]: time="2025-01-16T08:57:31.274717376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:31.275798 containerd[1468]: time="2025-01-16T08:57:31.275756986Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.936154542s" Jan 16 08:57:31.275798 containerd[1468]: time="2025-01-16T08:57:31.275797585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 16 08:57:31.279050 containerd[1468]: time="2025-01-16T08:57:31.278981824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 08:57:31.282065 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 16 08:57:31.290692 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:57:31.386694 kubelet[1943]: E0116 08:57:31.386447 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:57:31.393936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:57:31.394241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:57:31.895210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039310518.mount: Deactivated successfully. 
Jan 16 08:57:33.039744 containerd[1468]: time="2025-01-16T08:57:33.039678413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.041245 containerd[1468]: time="2025-01-16T08:57:33.040919624Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 08:57:33.042236 containerd[1468]: time="2025-01-16T08:57:33.041647376Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.045267 containerd[1468]: time="2025-01-16T08:57:33.045221358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.046709 containerd[1468]: time="2025-01-16T08:57:33.046664928Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.766346897s" Jan 16 08:57:33.046709 containerd[1468]: time="2025-01-16T08:57:33.046704599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 08:57:33.047441 containerd[1468]: time="2025-01-16T08:57:33.047392747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 08:57:33.556501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538466382.mount: Deactivated successfully. 
Jan 16 08:57:33.567450 containerd[1468]: time="2025-01-16T08:57:33.566590570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.568968 containerd[1468]: time="2025-01-16T08:57:33.568873454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 16 08:57:33.570089 containerd[1468]: time="2025-01-16T08:57:33.570054356Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.572692 containerd[1468]: time="2025-01-16T08:57:33.572649376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:33.574228 containerd[1468]: time="2025-01-16T08:57:33.573761770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.333704ms" Jan 16 08:57:33.574228 containerd[1468]: time="2025-01-16T08:57:33.573803349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 16 08:57:33.574406 containerd[1468]: time="2025-01-16T08:57:33.574367025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 16 08:57:34.128529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197345862.mount: Deactivated successfully. Jan 16 08:57:34.378308 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 16 08:57:36.158533 containerd[1468]: time="2025-01-16T08:57:36.158466531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:36.160835 containerd[1468]: time="2025-01-16T08:57:36.160753948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 16 08:57:36.162643 containerd[1468]: time="2025-01-16T08:57:36.162573800Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:36.166703 containerd[1468]: time="2025-01-16T08:57:36.166634348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:36.168406 containerd[1468]: time="2025-01-16T08:57:36.168204439Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.593780512s" Jan 16 08:57:36.168406 containerd[1468]: time="2025-01-16T08:57:36.168254858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 16 08:57:39.938203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:39.945602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:39.986156 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-7.scope)... Jan 16 08:57:39.986199 systemd[1]: Reloading... Jan 16 08:57:40.130309 zram_generator::config[2116]: No configuration found. Jan 16 08:57:40.256184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:40.334761 systemd[1]: Reloading finished in 347 ms. Jan 16 08:57:40.393759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:40.398237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:40.400714 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 08:57:40.401077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:40.405997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:40.550390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:40.560683 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:57:40.621011 kubelet[2172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:57:40.621011 kubelet[2172]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 16 08:57:40.621011 kubelet[2172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:57:40.622414 kubelet[2172]: I0116 08:57:40.622324 2172 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:57:40.986512 kubelet[2172]: I0116 08:57:40.986365 2172 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 16 08:57:40.987234 kubelet[2172]: I0116 08:57:40.986682 2172 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:57:40.987234 kubelet[2172]: I0116 08:57:40.987084 2172 server.go:929] "Client rotation is on, will bootstrap in background" Jan 16 08:57:41.013940 kubelet[2172]: I0116 08:57:41.013750 2172 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:57:41.014137 kubelet[2172]: E0116 08:57:41.014098 2172 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.106.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:41.025473 kubelet[2172]: E0116 08:57:41.025312 2172 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 08:57:41.025473 kubelet[2172]: I0116 08:57:41.025354 2172 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 08:57:41.030369 kubelet[2172]: I0116 08:57:41.030336 2172 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:57:41.030762 kubelet[2172]: I0116 08:57:41.030664 2172 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 16 08:57:41.031228 kubelet[2172]: I0116 08:57:41.030945 2172 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:57:41.031228 kubelet[2172]: I0116 08:57:41.030987 2172 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-e-9b059e58c2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 08:57:41.031452 kubelet[2172]: I0116 08:57:41.031435 2172 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:57:41.031502 kubelet[2172]: I0116 08:57:41.031495 2172 container_manager_linux.go:300] "Creating device plugin manager" Jan 16 08:57:41.031699 kubelet[2172]: I0116 08:57:41.031669 2172 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:57:41.034108 kubelet[2172]: I0116 08:57:41.033888 2172 kubelet.go:408] "Attempting to sync node with API server" Jan 16 08:57:41.034108 kubelet[2172]: I0116 08:57:41.033930 2172 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:57:41.034108 kubelet[2172]: I0116 08:57:41.033982 2172 kubelet.go:314] "Adding apiserver pod source" Jan 16 08:57:41.034108 kubelet[2172]: I0116 08:57:41.034007 2172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:57:41.041161 kubelet[2172]: W0116 08:57:41.041078 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.106.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-9b059e58c2&limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:41.041346 kubelet[2172]: E0116 08:57:41.041170 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.227.106.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-9b059e58c2&limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:41.043236 kubelet[2172]: W0116 08:57:41.042829 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.106.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:41.043236 kubelet[2172]: E0116 08:57:41.042911 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.106.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:41.043236 kubelet[2172]: I0116 08:57:41.043048 2172 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 16 08:57:41.045410 kubelet[2172]: I0116 08:57:41.045377 2172 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:57:41.047027 kubelet[2172]: W0116 08:57:41.046298 2172 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 08:57:41.048303 kubelet[2172]: I0116 08:57:41.047406 2172 server.go:1269] "Started kubelet" Jan 16 08:57:41.048629 kubelet[2172]: I0116 08:57:41.048579 2172 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:57:41.057527 kubelet[2172]: I0116 08:57:41.057428 2172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:57:41.060202 kubelet[2172]: I0116 08:57:41.059196 2172 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:57:41.060202 kubelet[2172]: I0116 08:57:41.059395 2172 server.go:460] "Adding debug handlers to kubelet server" Jan 16 08:57:41.064775 kubelet[2172]: E0116 08:57:41.062051 2172 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.106.156:6443/api/v1/namespaces/default/events\": dial tcp 64.227.106.156:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-e-9b059e58c2.181b208aa03cb0ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-9b059e58c2,UID:ci-4152.2.0-e-9b059e58c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-9b059e58c2,},FirstTimestamp:2025-01-16 08:57:41.047365869 +0000 UTC m=+0.481497607,LastTimestamp:2025-01-16 08:57:41.047365869 +0000 UTC m=+0.481497607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-9b059e58c2,}" Jan 16 08:57:41.066261 kubelet[2172]: I0116 08:57:41.066147 2172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:57:41.073206 kubelet[2172]: E0116 08:57:41.072310 2172 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:57:41.073206 kubelet[2172]: I0116 08:57:41.072497 2172 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 16 08:57:41.073206 kubelet[2172]: I0116 08:57:41.066816 2172 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 08:57:41.073206 kubelet[2172]: I0116 08:57:41.072885 2172 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 08:57:41.073206 kubelet[2172]: I0116 08:57:41.072955 2172 reconciler.go:26] "Reconciler: start to sync state" Jan 16 08:57:41.074495 kubelet[2172]: W0116 08:57:41.074443 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.106.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:41.074623 kubelet[2172]: E0116 08:57:41.074610 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.106.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:41.075365 kubelet[2172]: E0116 08:57:41.075344 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:41.076063 kubelet[2172]: I0116 08:57:41.076037 2172 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:57:41.076346 kubelet[2172]: I0116 08:57:41.076323 2172 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:57:41.077558 kubelet[2172]: E0116 08:57:41.077506 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.106.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-9b059e58c2?timeout=10s\": dial tcp 64.227.106.156:6443: connect: connection refused" interval="200ms" Jan 16 08:57:41.078852 kubelet[2172]: I0116 08:57:41.078605 2172 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:57:41.093907 kubelet[2172]: I0116 08:57:41.093859 2172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:57:41.095889 kubelet[2172]: I0116 08:57:41.095413 2172 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 16 08:57:41.095889 kubelet[2172]: I0116 08:57:41.095462 2172 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:57:41.095889 kubelet[2172]: I0116 08:57:41.095485 2172 kubelet.go:2321] "Starting kubelet main sync loop" Jan 16 08:57:41.095889 kubelet[2172]: E0116 08:57:41.095539 2172 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:57:41.102628 kubelet[2172]: W0116 08:57:41.102538 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.106.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:41.102895 kubelet[2172]: E0116 08:57:41.102852 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.106.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:41.110082 kubelet[2172]: I0116 08:57:41.110041 2172 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:57:41.110082 kubelet[2172]: I0116 08:57:41.110070 2172 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:57:41.110082 kubelet[2172]: I0116 08:57:41.110097 2172 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:57:41.115925 kubelet[2172]: I0116 08:57:41.115656 2172 policy_none.go:49] "None policy: Start" Jan 16 08:57:41.117217 kubelet[2172]: I0116 08:57:41.117165 2172 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:57:41.117464 kubelet[2172]: I0116 08:57:41.117307 2172 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:57:41.129008 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 08:57:41.146474 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 08:57:41.150877 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 08:57:41.164436 kubelet[2172]: I0116 08:57:41.162309 2172 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:57:41.167069 kubelet[2172]: I0116 08:57:41.167036 2172 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 08:57:41.168109 kubelet[2172]: I0116 08:57:41.167064 2172 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 08:57:41.168292 kubelet[2172]: I0116 08:57:41.168169 2172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:57:41.169343 kubelet[2172]: E0116 08:57:41.169239 2172 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:41.208103 systemd[1]: Created slice kubepods-burstable-pod46ea317bf8af53ddb3977b1ddcbf21f8.slice - libcontainer container kubepods-burstable-pod46ea317bf8af53ddb3977b1ddcbf21f8.slice. Jan 16 08:57:41.216515 systemd[1]: Created slice kubepods-burstable-pod32e37c008c8a171c8d22ac43a93521c4.slice - libcontainer container kubepods-burstable-pod32e37c008c8a171c8d22ac43a93521c4.slice. 
Jan 16 08:57:41.223305 systemd[1]: Created slice kubepods-burstable-pod5b1a2a8dea19dbbcf40395d32c7855e4.slice - libcontainer container kubepods-burstable-pod5b1a2a8dea19dbbcf40395d32c7855e4.slice. Jan 16 08:57:41.272139 kubelet[2172]: I0116 08:57:41.271345 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.272139 kubelet[2172]: E0116 08:57:41.271907 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.106.156:6443/api/v1/nodes\": dial tcp 64.227.106.156:6443: connect: connection refused" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274005 kubelet[2172]: I0116 08:57:41.273815 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274005 kubelet[2172]: I0116 08:57:41.273847 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274005 kubelet[2172]: I0116 08:57:41.273868 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274005 kubelet[2172]: I0116 08:57:41.273888 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274005 kubelet[2172]: I0116 08:57:41.273905 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274245 kubelet[2172]: I0116 08:57:41.273921 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274245 kubelet[2172]: I0116 08:57:41.273936 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " 
pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274245 kubelet[2172]: I0116 08:57:41.273951 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.274245 kubelet[2172]: I0116 08:57:41.273966 2172 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46ea317bf8af53ddb3977b1ddcbf21f8-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-e-9b059e58c2\" (UID: \"46ea317bf8af53ddb3977b1ddcbf21f8\") " pod="kube-system/kube-scheduler-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.278119 kubelet[2172]: E0116 08:57:41.278056 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.106.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-9b059e58c2?timeout=10s\": dial tcp 64.227.106.156:6443: connect: connection refused" interval="400ms" Jan 16 08:57:41.473829 kubelet[2172]: I0116 08:57:41.473779 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.474261 kubelet[2172]: E0116 08:57:41.474225 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.106.156:6443/api/v1/nodes\": dial tcp 64.227.106.156:6443: connect: connection refused" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.514128 kubelet[2172]: E0116 08:57:41.514019 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:41.517380 containerd[1468]: time="2025-01-16T08:57:41.517316192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-e-9b059e58c2,Uid:46ea317bf8af53ddb3977b1ddcbf21f8,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:41.519945 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 16 08:57:41.521079 kubelet[2172]: E0116 08:57:41.520495 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:41.521155 containerd[1468]: time="2025-01-16T08:57:41.520987347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-e-9b059e58c2,Uid:32e37c008c8a171c8d22ac43a93521c4,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:41.527228 kubelet[2172]: E0116 08:57:41.527057 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:41.528083 containerd[1468]: time="2025-01-16T08:57:41.527662818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-e-9b059e58c2,Uid:5b1a2a8dea19dbbcf40395d32c7855e4,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:41.679352 kubelet[2172]: E0116 08:57:41.679260 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.106.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-9b059e58c2?timeout=10s\": dial tcp 64.227.106.156:6443: connect: connection refused" interval="800ms" Jan 16 08:57:41.876027 kubelet[2172]: I0116 08:57:41.875858 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.876633 kubelet[2172]: E0116 08:57:41.876560 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.106.156:6443/api/v1/nodes\": dial tcp 64.227.106.156:6443: connect: connection refused" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:41.964049 kubelet[2172]: W0116 08:57:41.963963 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.106.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:41.964283 kubelet[2172]: E0116 08:57:41.964080 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.106.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:42.041675 kubelet[2172]: W0116 08:57:42.040674 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.106.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:42.041675 kubelet[2172]: E0116 08:57:42.041673 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.106.156:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:42.103153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340378743.mount: Deactivated successfully. 
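Every "Unhandled Error" above is the same symptom: nothing is listening on 64.227.106.156:6443 yet, because this kubelet must itself start the kube-apiserver static pod before any of its own API clients can connect. A quick standalone probe reproduces exactly what the reflectors and the lease controller see during this bootstrap window:

```go
// Minimal sketch: probe the API server endpoint the reflectors above keep
// dialing. Until the kube-apiserver static pod is running, this returns
// "connect: connection refused", exactly as logged.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "64.227.106.156:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver is accepting connections")
}
```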
Jan 16 08:57:42.112255 containerd[1468]: time="2025-01-16T08:57:42.112094999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:57:42.116883 containerd[1468]: time="2025-01-16T08:57:42.116567811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 08:57:42.117912 containerd[1468]: time="2025-01-16T08:57:42.117830228Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:57:42.119090 containerd[1468]: time="2025-01-16T08:57:42.119039353Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:57:42.123076 containerd[1468]: time="2025-01-16T08:57:42.122297918Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:57:42.124298 containerd[1468]: time="2025-01-16T08:57:42.124239135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:57:42.126375 containerd[1468]: time="2025-01-16T08:57:42.126218901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:57:42.127357 containerd[1468]: time="2025-01-16T08:57:42.127096864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:57:42.130118 containerd[1468]: time="2025-01-16T08:57:42.129823354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.743514ms" Jan 16 08:57:42.134112 containerd[1468]: time="2025-01-16T08:57:42.134049332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.578912ms" Jan 16 08:57:42.136463 containerd[1468]: time="2025-01-16T08:57:42.136078039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 608.139952ms" Jan 16 08:57:42.321577 containerd[1468]: time="2025-01-16T08:57:42.321452475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:42.323142 containerd[1468]: time="2025-01-16T08:57:42.322931277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:42.323690 containerd[1468]: time="2025-01-16T08:57:42.323420795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.324468 containerd[1468]: time="2025-01-16T08:57:42.324413881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.327037 containerd[1468]: time="2025-01-16T08:57:42.326932115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:42.328573 containerd[1468]: time="2025-01-16T08:57:42.328368601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:42.328573 containerd[1468]: time="2025-01-16T08:57:42.328389151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.328573 containerd[1468]: time="2025-01-16T08:57:42.328485977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.328860 containerd[1468]: time="2025-01-16T08:57:42.327934834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:42.328860 containerd[1468]: time="2025-01-16T08:57:42.328005719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:42.328860 containerd[1468]: time="2025-01-16T08:57:42.328022127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.328860 containerd[1468]: time="2025-01-16T08:57:42.328118136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:42.353495 systemd[1]: Started cri-containerd-077f17f5b5b93b3130528c87b1de218916dc8071ccc3dde32a2a7a9f51a2071b.scope - libcontainer container 077f17f5b5b93b3130528c87b1de218916dc8071ccc3dde32a2a7a9f51a2071b. Jan 16 08:57:42.376472 systemd[1]: Started cri-containerd-077085b9d5093444acb868a033783249657432832b9a1a3e7eac7d6b268c89cd.scope - libcontainer container 077085b9d5093444acb868a033783249657432832b9a1a3e7eac7d6b268c89cd. Jan 16 08:57:42.387164 systemd[1]: Started cri-containerd-65289f33bfaea8a256fd1bbc8ec2ab27567a53373bbb974f026d4fa968128592.scope - libcontainer container 65289f33bfaea8a256fd1bbc8ec2ab27567a53373bbb974f026d4fa968128592. 
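Note the lease-controller retry intervals: 200ms, then 400ms, then 800ms above, and 1.6s shortly below. The doubling schedule is read directly off the logged "will retry" intervals; the ceiling in this sketch is an assumption, since the log ends the series before any cap becomes visible:

```go
// Sketch of the retry schedule observed in the "Failed to ensure lease
// exists" errors: each interval doubles (200ms -> 400ms -> 800ms -> 1.6s).
// The doubling-with-cap policy is inferred from the log, not kubelet source.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // hypothetical ceiling, not from the log
	for i := 0; i < 5; i++ {
		fmt.Printf("retry %d after %v\n", i+1, interval)
		if interval*2 <= maxInterval {
			interval *= 2
		}
	}
}
```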
Jan 16 08:57:42.455328 containerd[1468]: time="2025-01-16T08:57:42.455275115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-e-9b059e58c2,Uid:5b1a2a8dea19dbbcf40395d32c7855e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"077f17f5b5b93b3130528c87b1de218916dc8071ccc3dde32a2a7a9f51a2071b\"" Jan 16 08:57:42.458591 kubelet[2172]: E0116 08:57:42.458525 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:42.466564 containerd[1468]: time="2025-01-16T08:57:42.466362337Z" level=info msg="CreateContainer within sandbox \"077f17f5b5b93b3130528c87b1de218916dc8071ccc3dde32a2a7a9f51a2071b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 08:57:42.481232 kubelet[2172]: E0116 08:57:42.480098 2172 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.106.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-e-9b059e58c2?timeout=10s\": dial tcp 64.227.106.156:6443: connect: connection refused" interval="1.6s" Jan 16 08:57:42.498589 containerd[1468]: time="2025-01-16T08:57:42.498316777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-e-9b059e58c2,Uid:32e37c008c8a171c8d22ac43a93521c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"077085b9d5093444acb868a033783249657432832b9a1a3e7eac7d6b268c89cd\"" Jan 16 08:57:42.501148 kubelet[2172]: E0116 08:57:42.500947 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:42.504053 containerd[1468]: time="2025-01-16T08:57:42.503937351Z" level=info msg="CreateContainer within sandbox \"077085b9d5093444acb868a033783249657432832b9a1a3e7eac7d6b268c89cd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 08:57:42.511595 containerd[1468]: time="2025-01-16T08:57:42.511537589Z" level=info msg="CreateContainer within sandbox \"077f17f5b5b93b3130528c87b1de218916dc8071ccc3dde32a2a7a9f51a2071b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"617140cca988e5779511813b9b778ab7a59c070f615c20ef19375794ed9c845e\"" Jan 16 08:57:42.513783 containerd[1468]: time="2025-01-16T08:57:42.513379865Z" level=info msg="StartContainer for \"617140cca988e5779511813b9b778ab7a59c070f615c20ef19375794ed9c845e\"" Jan 16 08:57:42.518137 containerd[1468]: time="2025-01-16T08:57:42.518087010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-e-9b059e58c2,Uid:46ea317bf8af53ddb3977b1ddcbf21f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"65289f33bfaea8a256fd1bbc8ec2ab27567a53373bbb974f026d4fa968128592\"" Jan 16 08:57:42.521072 kubelet[2172]: E0116 08:57:42.520863 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:42.523137 kubelet[2172]: W0116 08:57:42.522220 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.106.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-9b059e58c2&limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:42.523137 
kubelet[2172]: E0116 08:57:42.522319 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.106.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-e-9b059e58c2&limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:42.525538 containerd[1468]: time="2025-01-16T08:57:42.525477787Z" level=info msg="CreateContainer within sandbox \"65289f33bfaea8a256fd1bbc8ec2ab27567a53373bbb974f026d4fa968128592\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 08:57:42.536371 kubelet[2172]: W0116 08:57:42.536275 2172 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.106.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.106.156:6443: connect: connection refused Jan 16 08:57:42.536771 kubelet[2172]: E0116 08:57:42.536718 2172 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.106.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.106.156:6443: connect: connection refused" logger="UnhandledError" Jan 16 08:57:42.540595 containerd[1468]: time="2025-01-16T08:57:42.540532522Z" level=info msg="CreateContainer within sandbox \"077085b9d5093444acb868a033783249657432832b9a1a3e7eac7d6b268c89cd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38bc10b87a44b90c48b6c617afe0d19a87c7155985712efe873ba4702588baef\"" Jan 16 08:57:42.541453 containerd[1468]: time="2025-01-16T08:57:42.541416386Z" level=info msg="StartContainer for \"38bc10b87a44b90c48b6c617afe0d19a87c7155985712efe873ba4702588baef\"" Jan 16 08:57:42.549420 containerd[1468]: time="2025-01-16T08:57:42.549362112Z" level=info msg="CreateContainer within sandbox \"65289f33bfaea8a256fd1bbc8ec2ab27567a53373bbb974f026d4fa968128592\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4863e3a229ddc6f44dfa4d91a2e759a497bdb5dee0762f4cd0efc4e32d5bae90\"" Jan 16 08:57:42.551213 containerd[1468]: time="2025-01-16T08:57:42.550102058Z" level=info msg="StartContainer for \"4863e3a229ddc6f44dfa4d91a2e759a497bdb5dee0762f4cd0efc4e32d5bae90\"" Jan 16 08:57:42.579447 systemd[1]: Started cri-containerd-617140cca988e5779511813b9b778ab7a59c070f615c20ef19375794ed9c845e.scope - libcontainer container 617140cca988e5779511813b9b778ab7a59c070f615c20ef19375794ed9c845e. Jan 16 08:57:42.598518 systemd[1]: Started cri-containerd-38bc10b87a44b90c48b6c617afe0d19a87c7155985712efe873ba4702588baef.scope - libcontainer container 38bc10b87a44b90c48b6c617afe0d19a87c7155985712efe873ba4702588baef. Jan 16 08:57:42.615512 systemd[1]: Started cri-containerd-4863e3a229ddc6f44dfa4d91a2e759a497bdb5dee0762f4cd0efc4e32d5bae90.scope - libcontainer container 4863e3a229ddc6f44dfa4d91a2e759a497bdb5dee0762f4cd0efc4e32d5bae90. 
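The containerd and kubelet records above trace the CRI sequence each static pod goes through: RunPodSandbox returns a sandbox id (backed by the pause:3.8 image pulled earlier), CreateContainer places the real container inside that sandbox, and StartContainer launches it. A hedged sketch of the same three calls through the generated gRPC client in k8s.io/cri-api, with request fields trimmed to those visible in the log (a real call needs a fuller PodSandboxConfig and a ContainerConfig with an image):

```go
// Hedged sketch of the CRI call order logged above, against containerd's
// runtime.v1 service. Fields are pared down for illustration.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> pause container; returns the sandbox id.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-ci-4152.2.0-e-9b059e58c2",
				Uid:       "46ea317bf8af53ddb3977b1ddcbf21f8",
				Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within the sandbox, then 3. StartContainer.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```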
Jan 16 08:57:42.681464 kubelet[2172]: I0116 08:57:42.679199 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:42.681464 kubelet[2172]: E0116 08:57:42.681384 2172 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.106.156:6443/api/v1/nodes\": dial tcp 64.227.106.156:6443: connect: connection refused" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:42.697700 containerd[1468]: time="2025-01-16T08:57:42.697656175Z" level=info msg="StartContainer for \"38bc10b87a44b90c48b6c617afe0d19a87c7155985712efe873ba4702588baef\" returns successfully" Jan 16 08:57:42.711598 containerd[1468]: time="2025-01-16T08:57:42.711550078Z" level=info msg="StartContainer for \"617140cca988e5779511813b9b778ab7a59c070f615c20ef19375794ed9c845e\" returns successfully" Jan 16 08:57:42.727235 containerd[1468]: time="2025-01-16T08:57:42.726725001Z" level=info msg="StartContainer for \"4863e3a229ddc6f44dfa4d91a2e759a497bdb5dee0762f4cd0efc4e32d5bae90\" returns successfully" Jan 16 08:57:43.113467 kubelet[2172]: E0116 08:57:43.113357 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:43.118856 kubelet[2172]: E0116 08:57:43.118605 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:43.120589 kubelet[2172]: E0116 08:57:43.120503 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:44.125441 kubelet[2172]: E0116 08:57:44.125334 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:44.283631 kubelet[2172]: I0116 08:57:44.282932 2172 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:44.306932 kubelet[2172]: E0116 08:57:44.306768 2172 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:44.778785 kubelet[2172]: E0116 08:57:44.778584 2172 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.0-e-9b059e58c2\" not found" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:44.836661 kubelet[2172]: I0116 08:57:44.836608 2172 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:44.836661 kubelet[2172]: E0116 08:57:44.836672 2172 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152.2.0-e-9b059e58c2\": node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:44.857206 kubelet[2172]: E0116 08:57:44.856049 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:44.887262 kubelet[2172]: E0116 08:57:44.886860 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-e-9b059e58c2.181b208aa03cb0ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-9b059e58c2,UID:ci-4152.2.0-e-9b059e58c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-9b059e58c2,},FirstTimestamp:2025-01-16 08:57:41.047365869 +0000 UTC m=+0.481497607,LastTimestamp:2025-01-16 08:57:41.047365869 +0000 UTC m=+0.481497607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-9b059e58c2,}" Jan 16 08:57:44.943167 kubelet[2172]: E0116 08:57:44.943006 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-e-9b059e58c2.181b208aa1b8f2c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-9b059e58c2,UID:ci-4152.2.0-e-9b059e58c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-9b059e58c2,},FirstTimestamp:2025-01-16 08:57:41.072286401 +0000 UTC m=+0.506418140,LastTimestamp:2025-01-16 08:57:41.072286401 +0000 UTC m=+0.506418140,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-9b059e58c2,}" Jan 16 08:57:44.956708 kubelet[2172]: E0116 08:57:44.956661 2172 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:45.002240 kubelet[2172]: E0116 08:57:45.002098 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-e-9b059e58c2.181b208aa3eb4897 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-9b059e58c2,UID:ci-4152.2.0-e-9b059e58c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4152.2.0-e-9b059e58c2 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-9b059e58c2,},FirstTimestamp:2025-01-16 08:57:41.109139607 +0000 UTC m=+0.543271353,LastTimestamp:2025-01-16 08:57:41.109139607 +0000 UTC m=+0.543271353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-9b059e58c2,}" Jan 16 08:57:45.045523 kubelet[2172]: I0116 08:57:45.043514 2172 apiserver.go:52] "Watching apiserver" Jan 16 08:57:45.057425 kubelet[2172]: E0116 08:57:45.057068 2172 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152.2.0-e-9b059e58c2.181b208aa3eba7f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-e-9b059e58c2,UID:ci-4152.2.0-e-9b059e58c2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4152.2.0-e-9b059e58c2 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-e-9b059e58c2,},FirstTimestamp:2025-01-16 08:57:41.109164019 +0000 UTC m=+0.543295757,LastTimestamp:2025-01-16 08:57:41.109164019 +0000 UTC m=+0.543295757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-e-9b059e58c2,}" Jan 16 08:57:45.074067 kubelet[2172]: I0116 08:57:45.073982 2172 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 08:57:47.032839 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... Jan 16 08:57:47.032871 systemd[1]: Reloading... Jan 16 08:57:47.137240 zram_generator::config[2482]: No configuration found. Jan 16 08:57:47.313429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:57:47.426395 systemd[1]: Reloading finished in 392 ms. Jan 16 08:57:47.479048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:47.489006 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 08:57:47.489350 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:47.496716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:57:47.649009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:57:47.665849 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:57:47.766210 kubelet[2537]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:57:47.766210 kubelet[2537]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:57:47.766210 kubelet[2537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:57:47.766210 kubelet[2537]: I0116 08:57:47.764817 2537 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:57:47.778920 kubelet[2537]: I0116 08:57:47.778831 2537 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 16 08:57:47.778920 kubelet[2537]: I0116 08:57:47.778902 2537 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:57:47.779415 kubelet[2537]: I0116 08:57:47.779377 2537 server.go:929] "Client rotation is on, will bootstrap in background" Jan 16 08:57:47.783214 kubelet[2537]: I0116 08:57:47.782332 2537 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
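Unlike the first kubelet (pid 2172), which had to post a CSR against a dead API server, the restarted kubelet (pid 2537) finds the client credential that bootstrap eventually produced and loads it from the combined PEM above. A small sketch of reading that file and checking how long remains before background rotation must replace it (illustrative only; the kubelet's certificate_store does more bookkeeping):

```go
// Sketch: the combined PEM loaded above holds both the client certificate
// and its private key, so the same path can be passed for both arguments.
// The leaf's expiry is what drives the "bootstrap in background" rotation.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"time"
)

func main() {
	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pemPath, pemPath)
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject=%s notAfter=%s (%s left)\n",
		leaf.Subject, leaf.NotAfter, time.Until(leaf.NotAfter).Round(time.Minute))
}
```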
Jan 16 08:57:47.785753 kubelet[2537]: I0116 08:57:47.785705 2537 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:57:47.795042 kubelet[2537]: E0116 08:57:47.794031 2537 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 08:57:47.795042 kubelet[2537]: I0116 08:57:47.794218 2537 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 08:57:47.798906 kubelet[2537]: I0116 08:57:47.798865 2537 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 08:57:47.799385 kubelet[2537]: I0116 08:57:47.799363 2537 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 16 08:57:47.799734 kubelet[2537]: I0116 08:57:47.799687 2537 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:57:47.800118 kubelet[2537]: I0116 08:57:47.799833 2537 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-e-9b059e58c2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 08:57:47.800348 kubelet[2537]: I0116 08:57:47.800331 2537 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:57:47.800443 kubelet[2537]: I0116 08:57:47.800432 2537 container_manager_linux.go:300] "Creating device plugin manager" Jan 16 08:57:47.800565 kubelet[2537]: I0116 08:57:47.800554 2537 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:57:47.800811 kubelet[2537]: I0116 08:57:47.800797 2537 kubelet.go:408] "Attempting to sync node with API server" Jan 16 08:57:47.800909 kubelet[2537]: I0116 08:57:47.800898 2537 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:57:47.801010 kubelet[2537]: I0116 
08:57:47.800999 2537 kubelet.go:314] "Adding apiserver pod source" Jan 16 08:57:47.801089 kubelet[2537]: I0116 08:57:47.801079 2537 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:57:47.814519 kubelet[2537]: I0116 08:57:47.814462 2537 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 16 08:57:47.815857 kubelet[2537]: I0116 08:57:47.815818 2537 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:57:47.816775 kubelet[2537]: I0116 08:57:47.816750 2537 server.go:1269] "Started kubelet" Jan 16 08:57:47.820583 kubelet[2537]: I0116 08:57:47.820549 2537 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:57:47.831272 kubelet[2537]: I0116 08:57:47.831151 2537 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:57:47.833261 kubelet[2537]: I0116 08:57:47.832648 2537 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:57:47.833261 kubelet[2537]: I0116 08:57:47.833056 2537 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:57:47.834389 kubelet[2537]: I0116 08:57:47.834360 2537 server.go:460] "Adding debug handlers to kubelet server" Jan 16 08:57:47.837208 kubelet[2537]: I0116 08:57:47.835439 2537 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 08:57:47.837997 kubelet[2537]: I0116 08:57:47.837968 2537 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 16 08:57:47.838582 kubelet[2537]: E0116 08:57:47.838547 2537 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.2.0-e-9b059e58c2\" not found" Jan 16 08:57:47.849604 kubelet[2537]: I0116 08:57:47.849525 2537 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:57:47.849793 kubelet[2537]: I0116 08:57:47.849679 2537 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:57:47.852495 kubelet[2537]: I0116 08:57:47.851914 2537 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 16 08:57:47.852495 kubelet[2537]: I0116 08:57:47.852167 2537 reconciler.go:26] "Reconciler: start to sync state" Jan 16 08:57:47.857747 kubelet[2537]: I0116 08:57:47.857700 2537 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:57:47.863071 kubelet[2537]: I0116 08:57:47.863028 2537 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 16 08:57:47.863384 kubelet[2537]: I0116 08:57:47.863367 2537 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:57:47.863506 kubelet[2537]: I0116 08:57:47.863495 2537 kubelet.go:2321] "Starting kubelet main sync loop" Jan 16 08:57:47.864066 kubelet[2537]: E0116 08:57:47.863658 2537 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:57:47.867229 kubelet[2537]: I0116 08:57:47.866371 2537 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:57:47.958992 kubelet[2537]: I0116 08:57:47.958846 2537 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:57:47.959220 kubelet[2537]: I0116 08:57:47.959187 2537 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:57:47.959923 kubelet[2537]: I0116 08:57:47.959899 2537 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:57:47.960476 kubelet[2537]: I0116 08:57:47.960433 2537 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 08:57:47.960602 kubelet[2537]: I0116 08:57:47.960575 2537 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 08:57:47.960701 kubelet[2537]: I0116 08:57:47.960692 2537 policy_none.go:49] "None policy: Start" Jan 16 08:57:47.962015 kubelet[2537]: I0116 08:57:47.961996 2537 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:57:47.962268 kubelet[2537]: I0116 08:57:47.962251 2537 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:57:47.962726 kubelet[2537]: I0116 08:57:47.962696 2537 state_mem.go:75] "Updated machine memory state" Jan 16 08:57:47.964061 kubelet[2537]: E0116 08:57:47.964035 2537 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 08:57:47.970036 kubelet[2537]: I0116 08:57:47.969988 2537 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:57:47.970247 kubelet[2537]: I0116 08:57:47.970233 2537 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 08:57:47.970301 kubelet[2537]: I0116 08:57:47.970249 2537 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 08:57:47.974091 kubelet[2537]: I0116 08:57:47.973654 2537 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:57:48.042425 sudo[2569]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 16 08:57:48.043773 sudo[2569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 16 08:57:48.078217 kubelet[2537]: I0116 08:57:48.077761 2537 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.117395 kubelet[2537]: I0116 08:57:48.116646 2537 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.117395 kubelet[2537]: I0116 08:57:48.116799 2537 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.197126 kubelet[2537]: W0116 08:57:48.197059 2537 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:57:48.199886 kubelet[2537]: W0116 08:57:48.199268 2537 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:57:48.202259 kubelet[2537]: W0116 08:57:48.200245 2537 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:57:48.254976 kubelet[2537]: I0116 08:57:48.254825 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46ea317bf8af53ddb3977b1ddcbf21f8-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-e-9b059e58c2\" (UID: \"46ea317bf8af53ddb3977b1ddcbf21f8\") " pod="kube-system/kube-scheduler-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.254976 kubelet[2537]: I0116 08:57:48.254890 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.254976 kubelet[2537]: I0116 08:57:48.254922 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256326 kubelet[2537]: I0116 08:57:48.256241 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256615 kubelet[2537]: I0116 08:57:48.256346 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256615 kubelet[2537]: I0116 08:57:48.256380 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32e37c008c8a171c8d22ac43a93521c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-e-9b059e58c2\" (UID: \"32e37c008c8a171c8d22ac43a93521c4\") " pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256615 kubelet[2537]: I0116 08:57:48.256411 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256615 kubelet[2537]: I0116 08:57:48.256440 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.256615 kubelet[2537]: I0116 08:57:48.256470 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b1a2a8dea19dbbcf40395d32c7855e4-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-e-9b059e58c2\" (UID: \"5b1a2a8dea19dbbcf40395d32c7855e4\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" Jan 16 08:57:48.498319 kubelet[2537]: E0116 08:57:48.498265 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.501977 kubelet[2537]: E0116 08:57:48.501899 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.502628 kubelet[2537]: E0116 08:57:48.502434 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.801956 kubelet[2537]: I0116 08:57:48.801901 2537 apiserver.go:52] "Watching apiserver" Jan 16 08:57:48.819064 sudo[2569]: pam_unix(sudo:session): session closed for user root Jan 16 08:57:48.852387 kubelet[2537]: I0116 08:57:48.852330 2537 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 16 08:57:48.913494 kubelet[2537]: E0116 08:57:48.912824 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.914117 kubelet[2537]: E0116 08:57:48.913788 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.914117 kubelet[2537]: E0116 08:57:48.914000 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:48.970903 kubelet[2537]: I0116 08:57:48.970810 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-e-9b059e58c2" podStartSLOduration=0.970780138 podStartE2EDuration="970.780138ms" podCreationTimestamp="2025-01-16 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:57:48.95127536 +0000 UTC m=+1.260419420" watchObservedRunningTime="2025-01-16 08:57:48.970780138 +0000 UTC m=+1.279924197" Jan 16 08:57:48.999684 kubelet[2537]: I0116 08:57:48.999407 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-e-9b059e58c2" podStartSLOduration=0.999374327 podStartE2EDuration="999.374327ms" podCreationTimestamp="2025-01-16 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 
08:57:48.972758428 +0000 UTC m=+1.281902501" watchObservedRunningTime="2025-01-16 08:57:48.999374327 +0000 UTC m=+1.308518401" Jan 16 08:57:49.000974 kubelet[2537]: I0116 08:57:49.000504 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-e-9b059e58c2" podStartSLOduration=1.000478652 podStartE2EDuration="1.000478652s" podCreationTimestamp="2025-01-16 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:57:48.996620606 +0000 UTC m=+1.305764671" watchObservedRunningTime="2025-01-16 08:57:49.000478652 +0000 UTC m=+1.309622714" Jan 16 08:57:49.915153 kubelet[2537]: E0116 08:57:49.915109 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:50.672328 sudo[1655]: pam_unix(sudo:session): session closed for user root Jan 16 08:57:50.676374 sshd[1654]: Connection closed by 147.75.109.163 port 41326 Jan 16 08:57:50.678894 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Jan 16 08:57:50.682925 systemd[1]: sshd@6-64.227.106.156:22-147.75.109.163:41326.service: Deactivated successfully. Jan 16 08:57:50.685844 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 08:57:50.686057 systemd[1]: session-7.scope: Consumed 6.397s CPU time, 147.9M memory peak, 0B memory swap peak. Jan 16 08:57:50.687725 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 16 08:57:50.689271 systemd-logind[1448]: Removed session 7. Jan 16 08:57:51.290338 kubelet[2537]: E0116 08:57:51.289854 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:53.731317 systemd-timesyncd[1340]: Contacted time server 104.167.215.195:123 (2.flatcar.pool.ntp.org). Jan 16 08:57:53.731375 systemd-resolved[1321]: Clock change detected. Flushing caches. Jan 16 08:57:53.731385 systemd-timesyncd[1340]: Initial clock synchronization to Thu 2025-01-16 08:57:53.730904 UTC. Jan 16 08:57:54.019103 kubelet[2537]: E0116 08:57:54.018902 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:54.127802 kubelet[2537]: I0116 08:57:54.127716 2537 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 08:57:54.128553 containerd[1468]: time="2025-01-16T08:57:54.128464048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 08:57:54.129013 kubelet[2537]: I0116 08:57:54.128761 2537 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 08:57:54.336073 kubelet[2537]: E0116 08:57:54.335494 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:55.110045 systemd[1]: Created slice kubepods-besteffort-pod4fd160ed_7c19_4eb1_8b13_e0a1273fc0d8.slice - libcontainer container kubepods-besteffort-pod4fd160ed_7c19_4eb1_8b13_e0a1273fc0d8.slice. 
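An aside on the recurring `dns.go:153` errors: the kubelet caps a pod's resolv.conf at three nameservers, and this droplet's upstream configuration evidently lists 67.207.67.3 twice plus at least one further entry, so the surplus is dropped and the warning is re-emitted on every pod sync. A minimal sketch of that truncation over a hypothetical resolv.conf follows; it is not the kubelet's actual implementation, just the shape of the behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic three-nameserver limit the kubelet
// enforces when assembling a pod's resolv.conf; extra entries are dropped
// with a warning like the dns.go:153 lines in this journal.
const maxNameservers = 3

func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	// Hypothetical input; the duplicated 67.207.67.3 matches the
	// "applied nameserver line" logged above.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	kept, dropped := applyNameserverLimit(conf)
	fmt.Printf("applied: %s (omitted %d)\n", strings.Join(kept, " "), len(dropped))
	// applied: 67.207.67.3 67.207.67.2 67.207.67.3 (omitted 1)
}
```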
Jan 16 08:57:55.120771 kubelet[2537]: I0116 08:57:55.118257 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8-kube-proxy\") pod \"kube-proxy-cv5sf\" (UID: \"4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8\") " pod="kube-system/kube-proxy-cv5sf" Jan 16 08:57:55.120771 kubelet[2537]: I0116 08:57:55.118302 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8-lib-modules\") pod \"kube-proxy-cv5sf\" (UID: \"4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8\") " pod="kube-system/kube-proxy-cv5sf" Jan 16 08:57:55.120771 kubelet[2537]: I0116 08:57:55.118322 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8-xtables-lock\") pod \"kube-proxy-cv5sf\" (UID: \"4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8\") " pod="kube-system/kube-proxy-cv5sf" Jan 16 08:57:55.120771 kubelet[2537]: I0116 08:57:55.118340 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brv9k\" (UniqueName: \"kubernetes.io/projected/4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8-kube-api-access-brv9k\") pod \"kube-proxy-cv5sf\" (UID: \"4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8\") " pod="kube-system/kube-proxy-cv5sf" Jan 16 08:57:55.134321 systemd[1]: Created slice kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice - libcontainer container kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice. Jan 16 08:57:55.139510 kubelet[2537]: W0116 08:57:55.139466 2537 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152.2.0-e-9b059e58c2" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-e-9b059e58c2' and this object Jan 16 08:57:55.140337 kubelet[2537]: E0116 08:57:55.139739 2537 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4152.2.0-e-9b059e58c2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.0-e-9b059e58c2' and this object" logger="UnhandledError" Jan 16 08:57:55.219386 kubelet[2537]: I0116 08:57:55.219313 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-run\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.220136 kubelet[2537]: I0116 08:57:55.219787 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7mn\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-kube-api-access-cv7mn\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.220136 kubelet[2537]: I0116 08:57:55.220007 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cni-path\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.220136 kubelet[2537]: I0116 08:57:55.220073 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-xtables-lock\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.220136 kubelet[2537]: I0116 08:57:55.220103 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-bpf-maps\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221064 kubelet[2537]: I0116 08:57:55.220617 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-hostproc\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221064 kubelet[2537]: I0116 08:57:55.220657 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-etc-cni-netd\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221547 kubelet[2537]: I0116 08:57:55.221278 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-config-path\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221547 kubelet[2537]: I0116 08:57:55.221350 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-kernel\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221547 kubelet[2537]: I0116 08:57:55.221414 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-net\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.221547 kubelet[2537]: I0116 08:57:55.221446 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-cgroup\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.223824 kubelet[2537]: I0116 08:57:55.222337 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-hubble-tls\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 
16 08:57:55.223824 kubelet[2537]: I0116 08:57:55.222386 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-lib-modules\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.223824 kubelet[2537]: I0116 08:57:55.222427 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets\") pod \"cilium-vvqxp\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " pod="kube-system/cilium-vvqxp" Jan 16 08:57:55.252322 systemd[1]: Created slice kubepods-besteffort-pod2abbf928_a43b_436c_9638_d9f434aa963f.slice - libcontainer container kubepods-besteffort-pod2abbf928_a43b_436c_9638_d9f434aa963f.slice. Jan 16 08:57:55.323878 kubelet[2537]: I0116 08:57:55.323032 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8b2v\" (UniqueName: \"kubernetes.io/projected/2abbf928-a43b-436c-9638-d9f434aa963f-kube-api-access-r8b2v\") pod \"cilium-operator-5d85765b45-8gwhs\" (UID: \"2abbf928-a43b-436c-9638-d9f434aa963f\") " pod="kube-system/cilium-operator-5d85765b45-8gwhs" Jan 16 08:57:55.323878 kubelet[2537]: I0116 08:57:55.323103 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2abbf928-a43b-436c-9638-d9f434aa963f-cilium-config-path\") pod \"cilium-operator-5d85765b45-8gwhs\" (UID: \"2abbf928-a43b-436c-9638-d9f434aa963f\") " pod="kube-system/cilium-operator-5d85765b45-8gwhs" Jan 16 08:57:55.422348 kubelet[2537]: E0116 08:57:55.422203 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:55.425512 containerd[1468]: time="2025-01-16T08:57:55.424931529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cv5sf,Uid:4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:55.468119 containerd[1468]: time="2025-01-16T08:57:55.467890864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:55.468119 containerd[1468]: time="2025-01-16T08:57:55.468054449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:55.468119 containerd[1468]: time="2025-01-16T08:57:55.468080565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:55.468662 containerd[1468]: time="2025-01-16T08:57:55.468272585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:55.495173 systemd[1]: Started cri-containerd-016ef33f1ea160d20e608368c5a9203cc11bae4de5765fdf46312ab6e34ba4b4.scope - libcontainer container 016ef33f1ea160d20e608368c5a9203cc11bae4de5765fdf46312ab6e34ba4b4. 
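Every `VerifyControllerAttachedVolume` line above corresponds to a hostPath (or projected/secret/configmap) volume declared in the pod spec. As a rough sketch, the cilium volumes being verified would be declared like this with the k8s.io/api types; the paths are the usual cilium DaemonSet defaults and should be read as assumptions, not values taken from this node:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Re-declares a few of the hostPath volumes the reconciler verifies above
// (cilium-run, bpf-maps, cni-path, xtables-lock) as they would appear in a
// DaemonSet pod spec. Paths are typical cilium defaults, assumed here.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	for _, v := range []corev1.Volume{
		hostPathVolume("cilium-run", "/var/run/cilium"),
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),
		hostPathVolume("cni-path", "/opt/cni/bin"),
		hostPathVolume("xtables-lock", "/run/xtables.lock"),
	} {
		fmt.Printf("%-12s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```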
Jan 16 08:57:55.526461 containerd[1468]: time="2025-01-16T08:57:55.526406680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cv5sf,Uid:4fd160ed-7c19-4eb1-8b13-e0a1273fc0d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"016ef33f1ea160d20e608368c5a9203cc11bae4de5765fdf46312ab6e34ba4b4\"" Jan 16 08:57:55.527692 kubelet[2537]: E0116 08:57:55.527657 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:55.532745 containerd[1468]: time="2025-01-16T08:57:55.532685137Z" level=info msg="CreateContainer within sandbox \"016ef33f1ea160d20e608368c5a9203cc11bae4de5765fdf46312ab6e34ba4b4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 08:57:55.554681 containerd[1468]: time="2025-01-16T08:57:55.554628810Z" level=info msg="CreateContainer within sandbox \"016ef33f1ea160d20e608368c5a9203cc11bae4de5765fdf46312ab6e34ba4b4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad4e54e327956f546b100750661060203d986459ac75a249b2a5cf3d54012bc5\"" Jan 16 08:57:55.555981 containerd[1468]: time="2025-01-16T08:57:55.555451919Z" level=info msg="StartContainer for \"ad4e54e327956f546b100750661060203d986459ac75a249b2a5cf3d54012bc5\"" Jan 16 08:57:55.571549 kubelet[2537]: E0116 08:57:55.571140 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:55.572663 containerd[1468]: time="2025-01-16T08:57:55.572507864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8gwhs,Uid:2abbf928-a43b-436c-9638-d9f434aa963f,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:55.602346 systemd[1]: Started cri-containerd-ad4e54e327956f546b100750661060203d986459ac75a249b2a5cf3d54012bc5.scope - libcontainer container ad4e54e327956f546b100750661060203d986459ac75a249b2a5cf3d54012bc5. Jan 16 08:57:55.639646 containerd[1468]: time="2025-01-16T08:57:55.639347802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:55.639646 containerd[1468]: time="2025-01-16T08:57:55.639498947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:55.639646 containerd[1468]: time="2025-01-16T08:57:55.639524327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:55.640714 containerd[1468]: time="2025-01-16T08:57:55.640633831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:55.672426 systemd[1]: Started cri-containerd-94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78.scope - libcontainer container 94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78. 
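The kube-proxy lines trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer. A compressed sketch of the same three calls against containerd's CRI socket using k8s.io/cri-api is below; the configs are pared down to bare metadata and the image reference is a placeholder, so treat it as an outline of the protocol rather than working tooling:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// RunPodSandbox -> CreateContainer -> StartContainer, as logged above.
// A real caller also supplies DNS config, mounts, and linux options.
func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-demo", Namespace: "kube-system", Uid: "demo-uid",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Placeholder image ref; the real one comes from the pod spec.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println("started:", ctr.ContainerId, err)
}
```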
Jan 16 08:57:55.679958 containerd[1468]: time="2025-01-16T08:57:55.679813243Z" level=info msg="StartContainer for \"ad4e54e327956f546b100750661060203d986459ac75a249b2a5cf3d54012bc5\" returns successfully" Jan 16 08:57:55.757238 containerd[1468]: time="2025-01-16T08:57:55.757182291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8gwhs,Uid:2abbf928-a43b-436c-9638-d9f434aa963f,Namespace:kube-system,Attempt:0,} returns sandbox id \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\"" Jan 16 08:57:55.760767 kubelet[2537]: E0116 08:57:55.758886 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:55.762164 containerd[1468]: time="2025-01-16T08:57:55.761917402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 16 08:57:56.012156 kubelet[2537]: E0116 08:57:56.011668 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:56.325148 kubelet[2537]: E0116 08:57:56.324439 2537 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 16 08:57:56.326137 kubelet[2537]: E0116 08:57:56.326091 2537 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets podName:af5700ec-8778-4db6-9014-9cef58e0ee89 nodeName:}" failed. No retries permitted until 2025-01-16 08:57:56.824562347 +0000 UTC m=+8.723943549 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets") pod "cilium-vvqxp" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89") : failed to sync secret cache: timed out waiting for the condition Jan 16 08:57:56.343407 kubelet[2537]: E0116 08:57:56.343106 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:56.343407 kubelet[2537]: E0116 08:57:56.343395 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:56.361968 kubelet[2537]: I0116 08:57:56.360953 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cv5sf" podStartSLOduration=1.360915627 podStartE2EDuration="1.360915627s" podCreationTimestamp="2025-01-16 08:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:57:56.360831803 +0000 UTC m=+8.260213007" watchObservedRunningTime="2025-01-16 08:57:56.360915627 +0000 UTC m=+8.260296831" Jan 16 08:57:56.939251 kubelet[2537]: E0116 08:57:56.938494 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:56.946282 containerd[1468]: time="2025-01-16T08:57:56.946227948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvqxp,Uid:af5700ec-8778-4db6-9014-9cef58e0ee89,Namespace:kube-system,Attempt:0,}" Jan 16 08:57:57.005646 containerd[1468]: time="2025-01-16T08:57:57.005076464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:57:57.006088 containerd[1468]: time="2025-01-16T08:57:57.006011420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:57:57.006088 containerd[1468]: time="2025-01-16T08:57:57.006067547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:57.006346 containerd[1468]: time="2025-01-16T08:57:57.006282931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:57:57.049378 systemd[1]: Started cri-containerd-7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90.scope - libcontainer container 7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90. 
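The failed clustermesh-secrets mount is not retried immediately: `durationBeforeRetry 500ms` is the seed of a per-volume exponential backoff (here the secret cache synced once RBAC caught up, so one retry sufficed). A sketch of the schedule, assuming the kubelet's usual doubling factor and roughly two-minute cap; only the 500ms seed is taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Per-volume retry schedule implied by "durationBeforeRetry 500ms".
// The 2x factor and the cap are assumptions modeled on kubelet defaults;
// only the 500ms seed comes from this journal.
func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```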
Jan 16 08:57:57.091852 containerd[1468]: time="2025-01-16T08:57:57.091491330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvqxp,Uid:af5700ec-8778-4db6-9014-9cef58e0ee89,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\"" Jan 16 08:57:57.094004 kubelet[2537]: E0116 08:57:57.092651 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:57.346236 kubelet[2537]: E0116 08:57:57.346096 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:57:58.544739 containerd[1468]: time="2025-01-16T08:57:58.544160609Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:58.545808 containerd[1468]: time="2025-01-16T08:57:58.545750229Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Jan 16 08:57:58.546018 containerd[1468]: time="2025-01-16T08:57:58.545994119Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:57:58.548354 containerd[1468]: time="2025-01-16T08:57:58.548309787Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.786317494s" Jan 16 08:57:58.548598 containerd[1468]: time="2025-01-16T08:57:58.548494560Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 16 08:57:58.551500 containerd[1468]: time="2025-01-16T08:57:58.551448476Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 16 08:57:58.555582 containerd[1468]: time="2025-01-16T08:57:58.555373681Z" level=info msg="CreateContainer within sandbox \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 16 08:57:58.570665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255394230.mount: Deactivated successfully. 
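The operator image pull above reports both bytes read (18907225) and wall time (2.786317494s); dividing the two makes the effective rate explicit, about 6.5 MiB/s. Both numbers below are taken from the log verbatim:

```go
package main

import (
	"fmt"
	"time"
)

// Effective pull rate for the cilium-operator image, using the two
// figures logged above: 18907225 bytes read in 2.786317494s.
func main() {
	const bytesRead = 18907225.0
	dur, err := time.ParseDuration("2.786317494s")
	if err != nil {
		panic(err)
	}
	mib := bytesRead / (1 << 20)
	fmt.Printf("%.2f MiB in %v = %.2f MiB/s\n", mib, dur, mib/dur.Seconds())
	// 18.03 MiB in 2.786317494s = 6.47 MiB/s
}
```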
Jan 16 08:57:58.581860 containerd[1468]: time="2025-01-16T08:57:58.581794517Z" level=info msg="CreateContainer within sandbox \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\"" Jan 16 08:57:58.582724 containerd[1468]: time="2025-01-16T08:57:58.582356222Z" level=info msg="StartContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\"" Jan 16 08:57:58.622800 systemd[1]: run-containerd-runc-k8s.io-5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a-runc.FJAEM8.mount: Deactivated successfully. Jan 16 08:57:58.637404 systemd[1]: Started cri-containerd-5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a.scope - libcontainer container 5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a. Jan 16 08:57:58.672142 containerd[1468]: time="2025-01-16T08:57:58.672074133Z" level=info msg="StartContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" returns successfully" Jan 16 08:57:59.354026 kubelet[2537]: E0116 08:57:59.353767 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:00.357586 kubelet[2537]: E0116 08:58:00.356918 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:01.735553 kubelet[2537]: E0116 08:58:01.735215 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:01.801371 kubelet[2537]: I0116 08:58:01.790157 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8gwhs" podStartSLOduration=4.001703181 podStartE2EDuration="6.790125376s" podCreationTimestamp="2025-01-16 08:57:55 +0000 UTC" firstStartedPulling="2025-01-16 08:57:55.761195472 +0000 UTC m=+7.660576667" lastFinishedPulling="2025-01-16 08:57:58.549617675 +0000 UTC m=+10.448998862" observedRunningTime="2025-01-16 08:57:59.427434086 +0000 UTC m=+11.326815289" watchObservedRunningTime="2025-01-16 08:58:01.790125376 +0000 UTC m=+13.689506587" Jan 16 08:58:03.961025 update_engine[1449]: I20250116 08:58:03.959982 1449 update_attempter.cc:509] Updating boot flags... Jan 16 08:58:04.042764 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2968) Jan 16 08:58:04.120448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2972) Jan 16 08:58:04.208989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2972) Jan 16 08:58:04.775626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234409115.mount: Deactivated successfully. 
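The `podStartSLOduration=4.001703181s` on the cilium-operator line is the end-to-end startup time with the image-pull window subtracted, and the logged timestamps check out: 6.790125376s minus (08:57:58.549617675 − 08:57:55.761195472). A quick verification, reproducing the arithmetic from the exact values in the log:

```go
package main

import (
	"fmt"
	"time"
)

// podStartSLOduration = E2E duration - (lastFinishedPulling - firstStartedPulling),
// plugging in the cilium-operator timestamps logged above.
func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-01-16 08:57:55.761195472 +0000 UTC")
	last, _ := time.Parse(layout, "2025-01-16 08:57:58.549617675 +0000 UTC")
	e2e, _ := time.ParseDuration("6.790125376s")
	fmt.Println(e2e - last.Sub(first))
	// 4.001703173s — within nanoseconds of the logged 4.001703181s
	// (the tracker computes against monotonic m=+ readings).
}
```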
Jan 16 08:58:09.883620 containerd[1468]: time="2025-01-16T08:58:09.883489515Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:09.885332 containerd[1468]: time="2025-01-16T08:58:09.885260584Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735375" Jan 16 08:58:09.886599 containerd[1468]: time="2025-01-16T08:58:09.886549886Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:09.898590 containerd[1468]: time="2025-01-16T08:58:09.898493247Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.346994488s" Jan 16 08:58:09.899368 containerd[1468]: time="2025-01-16T08:58:09.898565840Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 16 08:58:09.903389 containerd[1468]: time="2025-01-16T08:58:09.903015724Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 08:58:09.992052 containerd[1468]: time="2025-01-16T08:58:09.991886071Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\"" Jan 16 08:58:09.993037 containerd[1468]: time="2025-01-16T08:58:09.993012690Z" level=info msg="StartContainer for \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\"" Jan 16 08:58:10.116292 systemd[1]: Started cri-containerd-145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa.scope - libcontainer container 145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa. Jan 16 08:58:10.158469 containerd[1468]: time="2025-01-16T08:58:10.157898815Z" level=info msg="StartContainer for \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\" returns successfully" Jan 16 08:58:10.172197 systemd[1]: cri-containerd-145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa.scope: Deactivated successfully. 
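Note the empty `repo tag ""` in the pull result above: the image was requested by digest, so no tag is recorded, only the repo digest. Pulling the same ref directly with containerd's Go client, in the `k8s.io` namespace that CRI-managed images live in, would look roughly like this sketch (requires the github.com/containerd/containerd module and access to the socket):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Pull-by-digest, as the kubelet did above; a digest ref pins the exact
// image content, which is why the log shows a repo digest but no repo tag.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// "k8s.io" is the namespace CRI stores its images under.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```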
Jan 16 08:58:10.267559 containerd[1468]: time="2025-01-16T08:58:10.241331326Z" level=info msg="shim disconnected" id=145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa namespace=k8s.io Jan 16 08:58:10.267559 containerd[1468]: time="2025-01-16T08:58:10.267354892Z" level=warning msg="cleaning up after shim disconnected" id=145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa namespace=k8s.io Jan 16 08:58:10.267559 containerd[1468]: time="2025-01-16T08:58:10.267370903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:58:10.382852 kubelet[2537]: E0116 08:58:10.382795 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:10.387979 containerd[1468]: time="2025-01-16T08:58:10.387353581Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 08:58:10.407885 containerd[1468]: time="2025-01-16T08:58:10.407700389Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\"" Jan 16 08:58:10.409041 containerd[1468]: time="2025-01-16T08:58:10.408682807Z" level=info msg="StartContainer for \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\"" Jan 16 08:58:10.451167 systemd[1]: Started cri-containerd-ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069.scope - libcontainer container ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069. Jan 16 08:58:10.493596 containerd[1468]: time="2025-01-16T08:58:10.493290418Z" level=info msg="StartContainer for \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\" returns successfully" Jan 16 08:58:10.506161 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:58:10.506409 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:58:10.506502 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:58:10.513510 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:58:10.514236 systemd[1]: cri-containerd-ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069.scope: Deactivated successfully. Jan 16 08:58:10.553014 containerd[1468]: time="2025-01-16T08:58:10.552947571Z" level=info msg="shim disconnected" id=ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069 namespace=k8s.io Jan 16 08:58:10.553014 containerd[1468]: time="2025-01-16T08:58:10.553015416Z" level=warning msg="cleaning up after shim disconnected" id=ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069 namespace=k8s.io Jan 16 08:58:10.553014 containerd[1468]: time="2025-01-16T08:58:10.553025396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:58:10.558139 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:58:10.984730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa-rootfs.mount: Deactivated successfully. 
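The cilium pod is now walking its init-container chain: mount-cgroup has exited (hence the shim cleanup), and apply-sysctl-overwrites runs next, which is also why systemd-sysctl gets restarted right after. Functionally, a step like apply-sysctl-overwrites writes kernel tunables under /proc/sys; the sketch below shows the mechanism with illustrative keys, which are assumptions rather than a dump of cilium's actual list:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Sketch of an "apply-sysctl-overwrites"-style init step: translate a
// dotted sysctl key into its /proc/sys path and write the value.
// The keys below are illustrative assumptions (requires root to run).
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	for key, val := range map[string]string{
		"net.ipv4.conf.all.rp_filter": "0",
		"net.ipv4.ip_forward":         "1",
	} {
		if err := writeSysctl(key, val); err != nil {
			fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", key, err)
		}
	}
}
```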
Jan 16 08:58:11.388477 kubelet[2537]: E0116 08:58:11.387163 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:11.390925 containerd[1468]: time="2025-01-16T08:58:11.390862380Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 08:58:11.430786 containerd[1468]: time="2025-01-16T08:58:11.430722706Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\"" Jan 16 08:58:11.432691 containerd[1468]: time="2025-01-16T08:58:11.432634494Z" level=info msg="StartContainer for \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\"" Jan 16 08:58:11.483243 systemd[1]: Started cri-containerd-7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5.scope - libcontainer container 7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5. Jan 16 08:58:11.534874 containerd[1468]: time="2025-01-16T08:58:11.534704310Z" level=info msg="StartContainer for \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\" returns successfully" Jan 16 08:58:11.534894 systemd[1]: cri-containerd-7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5.scope: Deactivated successfully. Jan 16 08:58:11.573023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5-rootfs.mount: Deactivated successfully. Jan 16 08:58:11.579993 containerd[1468]: time="2025-01-16T08:58:11.579523747Z" level=info msg="shim disconnected" id=7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5 namespace=k8s.io Jan 16 08:58:11.579993 containerd[1468]: time="2025-01-16T08:58:11.579676218Z" level=warning msg="cleaning up after shim disconnected" id=7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5 namespace=k8s.io Jan 16 08:58:11.579993 containerd[1468]: time="2025-01-16T08:58:11.579687303Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:58:12.393217 kubelet[2537]: E0116 08:58:12.392394 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:12.399608 containerd[1468]: time="2025-01-16T08:58:12.399463396Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 08:58:12.452316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432780492.mount: Deactivated successfully. 
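The mount-bpf-fs step that just completed amounts to ensuring a bpf filesystem is mounted at /sys/fs/bpf so pinned eBPF maps survive agent restarts. A minimal sketch of that operation follows; cilium's real init logic also handles chroot and remount corner cases, so this is an approximation, not its implementation:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// Idempotently mount bpffs at /sys/fs/bpf (requires root). Checking the
// filesystem magic first avoids stacking a second mount on top.
func main() {
	target := "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var fs unix.Statfs_t
	if err := unix.Statfs(target, &fs); err == nil && fs.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted")
		return
	}
	if err := unix.Mount("bpf", target, "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount:", err)
	}
}
```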
Jan 16 08:58:12.463352 containerd[1468]: time="2025-01-16T08:58:12.463265137Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\"" Jan 16 08:58:12.464369 containerd[1468]: time="2025-01-16T08:58:12.464330498Z" level=info msg="StartContainer for \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\"" Jan 16 08:58:12.508250 systemd[1]: Started cri-containerd-2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce.scope - libcontainer container 2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce. Jan 16 08:58:12.545313 systemd[1]: cri-containerd-2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce.scope: Deactivated successfully. Jan 16 08:58:12.558169 containerd[1468]: time="2025-01-16T08:58:12.549718076Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice/cri-containerd-2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce.scope/memory.events\": no such file or directory" Jan 16 08:58:12.558847 containerd[1468]: time="2025-01-16T08:58:12.558573936Z" level=info msg="StartContainer for \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\" returns successfully" Jan 16 08:58:12.592584 containerd[1468]: time="2025-01-16T08:58:12.592513132Z" level=info msg="shim disconnected" id=2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce namespace=k8s.io Jan 16 08:58:12.592584 containerd[1468]: time="2025-01-16T08:58:12.592572547Z" level=warning msg="cleaning up after shim disconnected" id=2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce namespace=k8s.io Jan 16 08:58:12.592584 containerd[1468]: time="2025-01-16T08:58:12.592581745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:58:13.397611 kubelet[2537]: E0116 08:58:13.397575 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:13.399928 containerd[1468]: time="2025-01-16T08:58:13.399896147Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 08:58:13.441930 systemd[1]: run-containerd-runc-k8s.io-2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce-runc.5ZcUgp.mount: Deactivated successfully. Jan 16 08:58:13.442086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce-rootfs.mount: Deactivated successfully. 
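The `failed to add inotify watch ... memory.events: no such file or directory` warning is most plausibly a race: clean-cilium-state exited so quickly that its cgroup scope was removed before containerd could attach the watch. The file in question is the cgroup v2 per-cgroup event counter list, a flat "key value" file; the sketch below just reads one, with an illustrative path since per-container scopes come and go:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Read a cgroup v2 memory.events file: one "counter value" pair per line
// (low, high, max, oom, oom_kill, ...). The path is illustrative; the
// per-container scope from the log above no longer exists by design.
func main() {
	f, err := os.Open("/sys/fs/cgroup/system.slice/memory.events")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.Fields(sc.Text()); len(fields) == 2 {
			fmt.Printf("%-10s %s\n", fields[0], fields[1])
		}
	}
}
```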
Jan 16 08:58:13.447998 containerd[1468]: time="2025-01-16T08:58:13.447887359Z" level=info msg="CreateContainer within sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\"" Jan 16 08:58:13.449989 containerd[1468]: time="2025-01-16T08:58:13.448689284Z" level=info msg="StartContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\"" Jan 16 08:58:13.486791 systemd[1]: run-containerd-runc-k8s.io-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060-runc.CyMG7Y.mount: Deactivated successfully. Jan 16 08:58:13.498235 systemd[1]: Started cri-containerd-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060.scope - libcontainer container f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060. Jan 16 08:58:13.546759 containerd[1468]: time="2025-01-16T08:58:13.545895989Z" level=info msg="StartContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" returns successfully" Jan 16 08:58:13.799024 kubelet[2537]: I0116 08:58:13.798332 2537 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 16 08:58:13.880337 systemd[1]: Created slice kubepods-burstable-pod4b3c020c_8a2e_48c6_a30c_332ec9f0d91b.slice - libcontainer container kubepods-burstable-pod4b3c020c_8a2e_48c6_a30c_332ec9f0d91b.slice. Jan 16 08:58:13.896534 systemd[1]: Created slice kubepods-burstable-pod9f448c59_ebf9_4b23_9a10_4cc0835155b2.slice - libcontainer container kubepods-burstable-pod9f448c59_ebf9_4b23_9a10_4cc0835155b2.slice. Jan 16 08:58:13.964913 kubelet[2537]: I0116 08:58:13.964719 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsgb\" (UniqueName: \"kubernetes.io/projected/4b3c020c-8a2e-48c6-a30c-332ec9f0d91b-kube-api-access-nbsgb\") pod \"coredns-6f6b679f8f-5ltbp\" (UID: \"4b3c020c-8a2e-48c6-a30c-332ec9f0d91b\") " pod="kube-system/coredns-6f6b679f8f-5ltbp" Jan 16 08:58:13.964913 kubelet[2537]: I0116 08:58:13.964788 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b3c020c-8a2e-48c6-a30c-332ec9f0d91b-config-volume\") pod \"coredns-6f6b679f8f-5ltbp\" (UID: \"4b3c020c-8a2e-48c6-a30c-332ec9f0d91b\") " pod="kube-system/coredns-6f6b679f8f-5ltbp" Jan 16 08:58:13.964913 kubelet[2537]: I0116 08:58:13.964823 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f448c59-ebf9-4b23-9a10-4cc0835155b2-config-volume\") pod \"coredns-6f6b679f8f-5b5c4\" (UID: \"9f448c59-ebf9-4b23-9a10-4cc0835155b2\") " pod="kube-system/coredns-6f6b679f8f-5b5c4" Jan 16 08:58:13.964913 kubelet[2537]: I0116 08:58:13.964854 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjc8\" (UniqueName: \"kubernetes.io/projected/9f448c59-ebf9-4b23-9a10-4cc0835155b2-kube-api-access-hpjc8\") pod \"coredns-6f6b679f8f-5b5c4\" (UID: \"9f448c59-ebf9-4b23-9a10-4cc0835155b2\") " pod="kube-system/coredns-6f6b679f8f-5b5c4" Jan 16 08:58:14.192893 kubelet[2537]: E0116 08:58:14.192715 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 
08:58:14.196574 containerd[1468]: time="2025-01-16T08:58:14.196503664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5ltbp,Uid:4b3c020c-8a2e-48c6-a30c-332ec9f0d91b,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:14.203767 kubelet[2537]: E0116 08:58:14.201700 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:14.204598 containerd[1468]: time="2025-01-16T08:58:14.204143867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5b5c4,Uid:9f448c59-ebf9-4b23-9a10-4cc0835155b2,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:14.406341 kubelet[2537]: E0116 08:58:14.405774 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:14.437125 kubelet[2537]: I0116 08:58:14.436961 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvqxp" podStartSLOduration=6.630677516 podStartE2EDuration="19.436931708s" podCreationTimestamp="2025-01-16 08:57:55 +0000 UTC" firstStartedPulling="2025-01-16 08:57:57.095219342 +0000 UTC m=+8.994600525" lastFinishedPulling="2025-01-16 08:58:09.901473521 +0000 UTC m=+21.800854717" observedRunningTime="2025-01-16 08:58:14.435828776 +0000 UTC m=+26.335209980" watchObservedRunningTime="2025-01-16 08:58:14.436931708 +0000 UTC m=+26.336312922" Jan 16 08:58:15.407927 kubelet[2537]: E0116 08:58:15.407682 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:16.002151 systemd-networkd[1376]: cilium_host: Link UP Jan 16 08:58:16.002395 systemd-networkd[1376]: cilium_net: Link UP Jan 16 08:58:16.002400 systemd-networkd[1376]: cilium_net: Gained carrier Jan 16 08:58:16.002689 systemd-networkd[1376]: cilium_host: Gained carrier Jan 16 08:58:16.174116 systemd-networkd[1376]: cilium_vxlan: Link UP Jan 16 08:58:16.174126 systemd-networkd[1376]: cilium_vxlan: Gained carrier Jan 16 08:58:16.410556 kubelet[2537]: E0116 08:58:16.410494 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:16.434779 systemd-networkd[1376]: cilium_host: Gained IPv6LL Jan 16 08:58:16.468016 kernel: NET: Registered PF_ALG protocol family Jan 16 08:58:16.578198 systemd-networkd[1376]: cilium_net: Gained IPv6LL Jan 16 08:58:17.383554 systemd-networkd[1376]: lxc_health: Link UP Jan 16 08:58:17.386417 systemd-networkd[1376]: lxc_health: Gained carrier Jan 16 08:58:17.474153 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL Jan 16 08:58:17.821096 kernel: eth0: renamed from tmp4e9bf Jan 16 08:58:17.823052 systemd-networkd[1376]: lxc30a16fb23eec: Link UP Jan 16 08:58:17.831062 kernel: eth0: renamed from tmp86908 Jan 16 08:58:17.836514 systemd-networkd[1376]: tmp86908: Configuring with /usr/lib/systemd/network/zz-default.network. 
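With the agent running, the datapath interfaces appear: cilium_host/cilium_net are the two ends of a veth pair, cilium_vxlan carries overlay traffic, and each pod gets its own lxc* veth (the `eth0: renamed from tmp...` kernel lines are the container end being moved and renamed). A minimal sketch of the underlying operation with the vishvananda/netlink library, using assumed demo names and requiring CAP_NET_ADMIN:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// Create a veth pair and bring both ends up, the same primitive behind
// cilium_host/cilium_net and the per-pod lxc* links logged above.
// Names are demo placeholders, not the agent's.
func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "demo_host"},
		PeerName:  "demo_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		fmt.Println("LinkAdd:", err)
		return
	}
	for _, name := range []string{"demo_host", "demo_net"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			fmt.Println("LinkByName:", err)
			continue
		}
		// systemd-networkd logs this transition as "Gained carrier".
		if err := netlink.LinkSetUp(link); err != nil {
			fmt.Println("LinkSetUp:", err)
		}
	}
}
```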
Jan 16 08:58:17.836633 systemd-networkd[1376]: tmp86908: Cannot enable IPv6, ignoring: No such file or directory Jan 16 08:58:17.836669 systemd-networkd[1376]: tmp86908: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Jan 16 08:58:17.836683 systemd-networkd[1376]: tmp86908: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Jan 16 08:58:17.836696 systemd-networkd[1376]: tmp86908: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Jan 16 08:58:17.836713 systemd-networkd[1376]: tmp86908: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Jan 16 08:58:17.840309 systemd-networkd[1376]: lxc113a5acf373e: Link UP Jan 16 08:58:17.840688 systemd-networkd[1376]: lxc30a16fb23eec: Gained carrier Jan 16 08:58:17.846141 systemd-networkd[1376]: lxc113a5acf373e: Gained carrier Jan 16 08:58:18.942895 kubelet[2537]: E0116 08:58:18.942841 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:19.203509 systemd-networkd[1376]: lxc_health: Gained IPv6LL Jan 16 08:58:19.420797 kubelet[2537]: E0116 08:58:19.420475 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:19.650680 systemd-networkd[1376]: lxc113a5acf373e: Gained IPv6LL Jan 16 08:58:19.906535 systemd-networkd[1376]: lxc30a16fb23eec: Gained IPv6LL Jan 16 08:58:20.424158 kubelet[2537]: E0116 08:58:20.424011 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:22.750741 containerd[1468]: time="2025-01-16T08:58:22.750510423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:22.756419 containerd[1468]: time="2025-01-16T08:58:22.750668829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:22.756419 containerd[1468]: time="2025-01-16T08:58:22.751206054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:22.756419 containerd[1468]: time="2025-01-16T08:58:22.751381200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:22.760239 containerd[1468]: time="2025-01-16T08:58:22.759674678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:22.760239 containerd[1468]: time="2025-01-16T08:58:22.759828648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:22.760239 containerd[1468]: time="2025-01-16T08:58:22.759846064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:22.760239 containerd[1468]: time="2025-01-16T08:58:22.760003783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:22.807280 systemd[1]: Started cri-containerd-4e9bfb232e71af8d3bac8ec93becfe28339bbafffa03f21f975472b19e7bcbf0.scope - libcontainer container 4e9bfb232e71af8d3bac8ec93becfe28339bbafffa03f21f975472b19e7bcbf0. Jan 16 08:58:22.819584 systemd[1]: Started cri-containerd-86908b96867e92ea53439e56c2e8073e2a7c0089bc9c1ec5b3901a08a6325f3a.scope - libcontainer container 86908b96867e92ea53439e56c2e8073e2a7c0089bc9c1ec5b3901a08a6325f3a. Jan 16 08:58:22.914461 containerd[1468]: time="2025-01-16T08:58:22.914412474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5ltbp,Uid:4b3c020c-8a2e-48c6-a30c-332ec9f0d91b,Namespace:kube-system,Attempt:0,} returns sandbox id \"86908b96867e92ea53439e56c2e8073e2a7c0089bc9c1ec5b3901a08a6325f3a\"" Jan 16 08:58:22.918956 kubelet[2537]: E0116 08:58:22.916641 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:22.937704 containerd[1468]: time="2025-01-16T08:58:22.937574019Z" level=info msg="CreateContainer within sandbox \"86908b96867e92ea53439e56c2e8073e2a7c0089bc9c1ec5b3901a08a6325f3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:58:22.955832 containerd[1468]: time="2025-01-16T08:58:22.955782456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5b5c4,Uid:9f448c59-ebf9-4b23-9a10-4cc0835155b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9bfb232e71af8d3bac8ec93becfe28339bbafffa03f21f975472b19e7bcbf0\"" Jan 16 08:58:22.958089 kubelet[2537]: E0116 08:58:22.957557 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:22.962343 containerd[1468]: time="2025-01-16T08:58:22.962108283Z" level=info msg="CreateContainer within sandbox \"4e9bfb232e71af8d3bac8ec93becfe28339bbafffa03f21f975472b19e7bcbf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:58:22.965727 containerd[1468]: time="2025-01-16T08:58:22.965082973Z" level=info msg="CreateContainer within sandbox \"86908b96867e92ea53439e56c2e8073e2a7c0089bc9c1ec5b3901a08a6325f3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43e6137994aa5ecf201bd7c022414b08a7ea312b056a07e08b0960635be9f45a\"" Jan 16 08:58:22.971815 containerd[1468]: time="2025-01-16T08:58:22.970754064Z" level=info msg="StartContainer for \"43e6137994aa5ecf201bd7c022414b08a7ea312b056a07e08b0960635be9f45a\"" Jan 16 08:58:23.009762 containerd[1468]: time="2025-01-16T08:58:23.008699381Z" level=info msg="CreateContainer within sandbox \"4e9bfb232e71af8d3bac8ec93becfe28339bbafffa03f21f975472b19e7bcbf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3585726f289d9357bc5e57d0e59b3609df38f9771dc78e8d15f44b55afacc066\"" Jan 16 08:58:23.011248 containerd[1468]: time="2025-01-16T08:58:23.010748086Z" level=info msg="StartContainer for \"3585726f289d9357bc5e57d0e59b3609df38f9771dc78e8d15f44b55afacc066\"" Jan 16 08:58:23.030261 systemd[1]: Started cri-containerd-43e6137994aa5ecf201bd7c022414b08a7ea312b056a07e08b0960635be9f45a.scope - libcontainer container 43e6137994aa5ecf201bd7c022414b08a7ea312b056a07e08b0960635be9f45a. 
Jan 16 08:58:23.071343 systemd[1]: Started cri-containerd-3585726f289d9357bc5e57d0e59b3609df38f9771dc78e8d15f44b55afacc066.scope - libcontainer container 3585726f289d9357bc5e57d0e59b3609df38f9771dc78e8d15f44b55afacc066. Jan 16 08:58:23.100727 containerd[1468]: time="2025-01-16T08:58:23.100628296Z" level=info msg="StartContainer for \"43e6137994aa5ecf201bd7c022414b08a7ea312b056a07e08b0960635be9f45a\" returns successfully" Jan 16 08:58:23.134010 containerd[1468]: time="2025-01-16T08:58:23.133806221Z" level=info msg="StartContainer for \"3585726f289d9357bc5e57d0e59b3609df38f9771dc78e8d15f44b55afacc066\" returns successfully" Jan 16 08:58:23.433878 kubelet[2537]: E0116 08:58:23.433438 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:23.437984 kubelet[2537]: E0116 08:58:23.437949 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:23.456199 kubelet[2537]: I0116 08:58:23.455754 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5b5c4" podStartSLOduration=28.455725836 podStartE2EDuration="28.455725836s" podCreationTimestamp="2025-01-16 08:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:23.454703538 +0000 UTC m=+35.354084741" watchObservedRunningTime="2025-01-16 08:58:23.455725836 +0000 UTC m=+35.355107040" Jan 16 08:58:23.477039 kubelet[2537]: I0116 08:58:23.476704 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5ltbp" podStartSLOduration=28.476685321 podStartE2EDuration="28.476685321s" podCreationTimestamp="2025-01-16 08:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:23.475400561 +0000 UTC m=+35.374781809" watchObservedRunningTime="2025-01-16 08:58:23.476685321 +0000 UTC m=+35.376066524" Jan 16 08:58:23.767916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931332442.mount: Deactivated successfully. Jan 16 08:58:24.440320 kubelet[2537]: E0116 08:58:24.439823 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:24.441322 kubelet[2537]: E0116 08:58:24.440387 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:25.441904 kubelet[2537]: E0116 08:58:25.441851 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:25.442591 kubelet[2537]: E0116 08:58:25.442562 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:26.200464 systemd[1]: Started sshd@7-64.227.106.156:22-147.75.109.163:46512.service - OpenSSH per-connection server daemon (147.75.109.163:46512). 
Jan 16 08:58:26.349612 sshd[3928]: Accepted publickey for core from 147.75.109.163 port 46512 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:26.351521 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:26.358716 systemd-logind[1448]: New session 8 of user core. Jan 16 08:58:26.370217 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 08:58:27.008242 sshd[3930]: Connection closed by 147.75.109.163 port 46512 Jan 16 08:58:27.008725 sshd-session[3928]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:27.020550 systemd[1]: sshd@7-64.227.106.156:22-147.75.109.163:46512.service: Deactivated successfully. Jan 16 08:58:27.026137 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 08:58:27.031654 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 16 08:58:27.034828 systemd-logind[1448]: Removed session 8. Jan 16 08:58:32.035380 systemd[1]: Started sshd@8-64.227.106.156:22-147.75.109.163:33148.service - OpenSSH per-connection server daemon (147.75.109.163:33148). Jan 16 08:58:32.127336 sshd[3944]: Accepted publickey for core from 147.75.109.163 port 33148 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:32.129772 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:32.136039 systemd-logind[1448]: New session 9 of user core. Jan 16 08:58:32.143263 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 08:58:32.335906 sshd[3946]: Connection closed by 147.75.109.163 port 33148 Jan 16 08:58:32.334765 sshd-session[3944]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:32.340590 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 16 08:58:32.342109 systemd[1]: sshd@8-64.227.106.156:22-147.75.109.163:33148.service: Deactivated successfully. Jan 16 08:58:32.345356 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 08:58:32.347310 systemd-logind[1448]: Removed session 9. Jan 16 08:58:37.358888 systemd[1]: Started sshd@9-64.227.106.156:22-147.75.109.163:54180.service - OpenSSH per-connection server daemon (147.75.109.163:54180). Jan 16 08:58:37.421841 sshd[3957]: Accepted publickey for core from 147.75.109.163 port 54180 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:37.423850 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:37.430857 systemd-logind[1448]: New session 10 of user core. Jan 16 08:58:37.437442 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 08:58:37.603828 sshd[3959]: Connection closed by 147.75.109.163 port 54180 Jan 16 08:58:37.604554 sshd-session[3957]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:37.615736 systemd[1]: sshd@9-64.227.106.156:22-147.75.109.163:54180.service: Deactivated successfully. Jan 16 08:58:37.620310 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 08:58:37.621852 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 16 08:58:37.624793 systemd-logind[1448]: Removed session 10. Jan 16 08:58:42.626399 systemd[1]: Started sshd@10-64.227.106.156:22-147.75.109.163:54194.service - OpenSSH per-connection server daemon (147.75.109.163:54194). 
Jan 16 08:58:42.708858 sshd[3971]: Accepted publickey for core from 147.75.109.163 port 54194 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:42.711765 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:42.720217 systemd-logind[1448]: New session 11 of user core. Jan 16 08:58:42.727412 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 08:58:42.897520 sshd[3973]: Connection closed by 147.75.109.163 port 54194 Jan 16 08:58:42.900140 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:42.910202 systemd[1]: sshd@10-64.227.106.156:22-147.75.109.163:54194.service: Deactivated successfully. Jan 16 08:58:42.912706 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 08:58:42.914030 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 16 08:58:42.924601 systemd[1]: Started sshd@11-64.227.106.156:22-147.75.109.163:54210.service - OpenSSH per-connection server daemon (147.75.109.163:54210). Jan 16 08:58:42.928022 systemd-logind[1448]: Removed session 11. Jan 16 08:58:43.008767 sshd[3986]: Accepted publickey for core from 147.75.109.163 port 54210 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:43.011233 sshd-session[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:43.017921 systemd-logind[1448]: New session 12 of user core. Jan 16 08:58:43.022271 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 08:58:43.249802 sshd[3988]: Connection closed by 147.75.109.163 port 54210 Jan 16 08:58:43.252803 sshd-session[3986]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:43.262183 systemd[1]: sshd@11-64.227.106.156:22-147.75.109.163:54210.service: Deactivated successfully. Jan 16 08:58:43.267058 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 08:58:43.272515 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 16 08:58:43.277404 systemd[1]: Started sshd@12-64.227.106.156:22-147.75.109.163:54220.service - OpenSSH per-connection server daemon (147.75.109.163:54220). Jan 16 08:58:43.281910 systemd-logind[1448]: Removed session 12. Jan 16 08:58:43.376442 sshd[3997]: Accepted publickey for core from 147.75.109.163 port 54220 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:43.378449 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:43.384499 systemd-logind[1448]: New session 13 of user core. Jan 16 08:58:43.388294 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 08:58:43.558881 sshd[3999]: Connection closed by 147.75.109.163 port 54220 Jan 16 08:58:43.560116 sshd-session[3997]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:43.564829 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 16 08:58:43.565482 systemd[1]: sshd@12-64.227.106.156:22-147.75.109.163:54220.service: Deactivated successfully. Jan 16 08:58:43.568325 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 08:58:43.571426 systemd-logind[1448]: Removed session 13. Jan 16 08:58:48.578351 systemd[1]: Started sshd@13-64.227.106.156:22-147.75.109.163:51918.service - OpenSSH per-connection server daemon (147.75.109.163:51918). 
Jan 16 08:58:48.663054 sshd[4012]: Accepted publickey for core from 147.75.109.163 port 51918 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:48.664704 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:48.670196 systemd-logind[1448]: New session 14 of user core. Jan 16 08:58:48.677309 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 08:58:48.824193 sshd[4014]: Connection closed by 147.75.109.163 port 51918 Jan 16 08:58:48.825194 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:48.835955 systemd[1]: sshd@13-64.227.106.156:22-147.75.109.163:51918.service: Deactivated successfully. Jan 16 08:58:48.838597 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 08:58:48.839713 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 16 08:58:48.842157 systemd-logind[1448]: Removed session 14. Jan 16 08:58:53.843373 systemd[1]: Started sshd@14-64.227.106.156:22-147.75.109.163:51924.service - OpenSSH per-connection server daemon (147.75.109.163:51924). Jan 16 08:58:53.921064 sshd[4026]: Accepted publickey for core from 147.75.109.163 port 51924 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:53.921786 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:53.926776 systemd-logind[1448]: New session 15 of user core. Jan 16 08:58:53.933328 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 08:58:54.071517 sshd[4028]: Connection closed by 147.75.109.163 port 51924 Jan 16 08:58:54.073314 sshd-session[4026]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.082487 systemd[1]: sshd@14-64.227.106.156:22-147.75.109.163:51924.service: Deactivated successfully. Jan 16 08:58:54.085217 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 08:58:54.087590 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 16 08:58:54.096409 systemd[1]: Started sshd@15-64.227.106.156:22-147.75.109.163:51938.service - OpenSSH per-connection server daemon (147.75.109.163:51938). Jan 16 08:58:54.099708 systemd-logind[1448]: Removed session 15. Jan 16 08:58:54.151624 sshd[4039]: Accepted publickey for core from 147.75.109.163 port 51938 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:54.152538 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.158849 systemd-logind[1448]: New session 16 of user core. Jan 16 08:58:54.170250 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 08:58:54.553616 sshd[4041]: Connection closed by 147.75.109.163 port 51938 Jan 16 08:58:54.556497 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:54.567057 systemd[1]: sshd@15-64.227.106.156:22-147.75.109.163:51938.service: Deactivated successfully. Jan 16 08:58:54.569992 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 08:58:54.572465 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 16 08:58:54.580495 systemd[1]: Started sshd@16-64.227.106.156:22-147.75.109.163:51946.service - OpenSSH per-connection server daemon (147.75.109.163:51946). Jan 16 08:58:54.582825 systemd-logind[1448]: Removed session 16. 
Jan 16 08:58:54.666984 sshd[4050]: Accepted publickey for core from 147.75.109.163 port 51946 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:54.668301 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:54.678121 systemd-logind[1448]: New session 17 of user core. Jan 16 08:58:54.684278 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 08:58:56.638315 sshd[4052]: Connection closed by 147.75.109.163 port 51946 Jan 16 08:58:56.640637 sshd-session[4050]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:56.660191 systemd[1]: sshd@16-64.227.106.156:22-147.75.109.163:51946.service: Deactivated successfully. Jan 16 08:58:56.665523 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 08:58:56.672229 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 16 08:58:56.680505 systemd[1]: Started sshd@17-64.227.106.156:22-147.75.109.163:51950.service - OpenSSH per-connection server daemon (147.75.109.163:51950). Jan 16 08:58:56.685151 systemd-logind[1448]: Removed session 17. Jan 16 08:58:56.745126 sshd[4071]: Accepted publickey for core from 147.75.109.163 port 51950 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:56.747805 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:56.754486 systemd-logind[1448]: New session 18 of user core. Jan 16 08:58:56.760660 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 08:58:57.140528 sshd[4073]: Connection closed by 147.75.109.163 port 51950 Jan 16 08:58:57.141639 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:57.152784 systemd[1]: sshd@17-64.227.106.156:22-147.75.109.163:51950.service: Deactivated successfully. Jan 16 08:58:57.155914 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 08:58:57.159097 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 16 08:58:57.166283 systemd[1]: Started sshd@18-64.227.106.156:22-147.75.109.163:51954.service - OpenSSH per-connection server daemon (147.75.109.163:51954). Jan 16 08:58:57.169564 systemd-logind[1448]: Removed session 18. Jan 16 08:58:57.230986 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 51954 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:58:57.232074 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:57.237437 systemd-logind[1448]: New session 19 of user core. Jan 16 08:58:57.245490 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 08:58:57.379741 sshd[4084]: Connection closed by 147.75.109.163 port 51954 Jan 16 08:58:57.381613 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:57.385215 systemd[1]: sshd@18-64.227.106.156:22-147.75.109.163:51954.service: Deactivated successfully. Jan 16 08:58:57.388237 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 08:58:57.390736 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 16 08:58:57.392428 systemd-logind[1448]: Removed session 19. 
Jan 16 08:59:00.275488 kubelet[2537]: E0116 08:59:00.274577 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:02.411449 systemd[1]: Started sshd@19-64.227.106.156:22-147.75.109.163:37292.service - OpenSSH per-connection server daemon (147.75.109.163:37292). Jan 16 08:59:02.501884 sshd[4095]: Accepted publickey for core from 147.75.109.163 port 37292 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:02.506262 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:02.513844 systemd-logind[1448]: New session 20 of user core. Jan 16 08:59:02.522348 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 08:59:02.759606 sshd[4097]: Connection closed by 147.75.109.163 port 37292 Jan 16 08:59:02.760532 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:02.770350 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 16 08:59:02.771297 systemd[1]: sshd@19-64.227.106.156:22-147.75.109.163:37292.service: Deactivated successfully. Jan 16 08:59:02.776613 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 08:59:02.779436 systemd-logind[1448]: Removed session 20. Jan 16 08:59:07.274283 kubelet[2537]: E0116 08:59:07.274224 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:07.773826 systemd[1]: Started sshd@20-64.227.106.156:22-147.75.109.163:40198.service - OpenSSH per-connection server daemon (147.75.109.163:40198). Jan 16 08:59:07.842631 sshd[4111]: Accepted publickey for core from 147.75.109.163 port 40198 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:07.844996 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:07.852328 systemd-logind[1448]: New session 21 of user core. Jan 16 08:59:07.862297 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 16 08:59:07.994707 sshd[4113]: Connection closed by 147.75.109.163 port 40198 Jan 16 08:59:07.995210 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:08.002393 systemd[1]: sshd@20-64.227.106.156:22-147.75.109.163:40198.service: Deactivated successfully. Jan 16 08:59:08.007240 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 08:59:08.009585 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 16 08:59:08.010900 systemd-logind[1448]: Removed session 21. Jan 16 08:59:13.019365 systemd[1]: Started sshd@21-64.227.106.156:22-147.75.109.163:40212.service - OpenSSH per-connection server daemon (147.75.109.163:40212). Jan 16 08:59:13.080999 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 40212 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:13.082374 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:13.087533 systemd-logind[1448]: New session 22 of user core. Jan 16 08:59:13.094299 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 16 08:59:13.264624 sshd[4125]: Connection closed by 147.75.109.163 port 40212 Jan 16 08:59:13.265789 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:13.270527 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 16 08:59:13.271572 systemd[1]: sshd@21-64.227.106.156:22-147.75.109.163:40212.service: Deactivated successfully. Jan 16 08:59:13.274838 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 08:59:13.277262 systemd-logind[1448]: Removed session 22. Jan 16 08:59:18.284328 systemd[1]: Started sshd@22-64.227.106.156:22-147.75.109.163:50850.service - OpenSSH per-connection server daemon (147.75.109.163:50850). Jan 16 08:59:18.343990 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 50850 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:18.345484 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:18.351368 systemd-logind[1448]: New session 23 of user core. Jan 16 08:59:18.363364 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 08:59:18.499810 sshd[4138]: Connection closed by 147.75.109.163 port 50850 Jan 16 08:59:18.500406 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:18.511771 systemd[1]: sshd@22-64.227.106.156:22-147.75.109.163:50850.service: Deactivated successfully. Jan 16 08:59:18.515161 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 08:59:18.518095 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 16 08:59:18.525410 systemd[1]: Started sshd@23-64.227.106.156:22-147.75.109.163:50856.service - OpenSSH per-connection server daemon (147.75.109.163:50856). Jan 16 08:59:18.527037 systemd-logind[1448]: Removed session 23. Jan 16 08:59:18.593330 sshd[4150]: Accepted publickey for core from 147.75.109.163 port 50856 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:18.594446 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:18.605241 systemd-logind[1448]: New session 24 of user core. Jan 16 08:59:18.612289 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 08:59:20.500377 systemd[1]: run-containerd-runc-k8s.io-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060-runc.y8W8Sv.mount: Deactivated successfully. Jan 16 08:59:20.504796 containerd[1468]: time="2025-01-16T08:59:20.503150266Z" level=info msg="StopContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" with timeout 30 (s)" Jan 16 08:59:20.507981 containerd[1468]: time="2025-01-16T08:59:20.505877430Z" level=info msg="Stop container \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" with signal terminated" Jan 16 08:59:20.526816 containerd[1468]: time="2025-01-16T08:59:20.526747054Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 08:59:20.529427 systemd[1]: cri-containerd-5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a.scope: Deactivated successfully. 
Jan 16 08:59:20.543545 containerd[1468]: time="2025-01-16T08:59:20.543166065Z" level=info msg="StopContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" with timeout 2 (s)" Jan 16 08:59:20.545326 containerd[1468]: time="2025-01-16T08:59:20.545088053Z" level=info msg="Stop container \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" with signal terminated" Jan 16 08:59:20.566004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a-rootfs.mount: Deactivated successfully. Jan 16 08:59:20.572428 systemd-networkd[1376]: lxc_health: Link DOWN Jan 16 08:59:20.572434 systemd-networkd[1376]: lxc_health: Lost carrier Jan 16 08:59:20.591753 containerd[1468]: time="2025-01-16T08:59:20.591609626Z" level=info msg="shim disconnected" id=5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a namespace=k8s.io Jan 16 08:59:20.592065 containerd[1468]: time="2025-01-16T08:59:20.592044087Z" level=warning msg="cleaning up after shim disconnected" id=5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a namespace=k8s.io Jan 16 08:59:20.592133 containerd[1468]: time="2025-01-16T08:59:20.592122014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:20.596412 systemd[1]: cri-containerd-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060.scope: Deactivated successfully. Jan 16 08:59:20.596635 systemd[1]: cri-containerd-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060.scope: Consumed 8.484s CPU time. Jan 16 08:59:20.620007 containerd[1468]: time="2025-01-16T08:59:20.619608994Z" level=warning msg="cleanup warnings time=\"2025-01-16T08:59:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 08:59:20.626071 containerd[1468]: time="2025-01-16T08:59:20.625922138Z" level=info msg="StopContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" returns successfully" Jan 16 08:59:20.627569 containerd[1468]: time="2025-01-16T08:59:20.627370996Z" level=info msg="StopPodSandbox for \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\"" Jan 16 08:59:20.631979 containerd[1468]: time="2025-01-16T08:59:20.629894124Z" level=info msg="Container to stop \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.637603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78-shm.mount: Deactivated successfully. Jan 16 08:59:20.652120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060-rootfs.mount: Deactivated successfully. Jan 16 08:59:20.662249 systemd[1]: cri-containerd-94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78.scope: Deactivated successfully. 
Jan 16 08:59:20.665649 containerd[1468]: time="2025-01-16T08:59:20.665344797Z" level=info msg="shim disconnected" id=f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060 namespace=k8s.io Jan 16 08:59:20.665649 containerd[1468]: time="2025-01-16T08:59:20.665421345Z" level=warning msg="cleaning up after shim disconnected" id=f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060 namespace=k8s.io Jan 16 08:59:20.665649 containerd[1468]: time="2025-01-16T08:59:20.665433582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:20.728777 containerd[1468]: time="2025-01-16T08:59:20.728640539Z" level=info msg="StopContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" returns successfully" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729612065Z" level=info msg="StopPodSandbox for \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\"" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729667445Z" level=info msg="Container to stop \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729711087Z" level=info msg="Container to stop \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729724055Z" level=info msg="Container to stop \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729735809Z" level=info msg="Container to stop \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.730144 containerd[1468]: time="2025-01-16T08:59:20.729761887Z" level=info msg="Container to stop \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 08:59:20.738895 containerd[1468]: time="2025-01-16T08:59:20.738617637Z" level=info msg="shim disconnected" id=94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78 namespace=k8s.io Jan 16 08:59:20.738895 containerd[1468]: time="2025-01-16T08:59:20.738691585Z" level=warning msg="cleaning up after shim disconnected" id=94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78 namespace=k8s.io Jan 16 08:59:20.738895 containerd[1468]: time="2025-01-16T08:59:20.738703383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:20.753655 systemd[1]: cri-containerd-7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90.scope: Deactivated successfully. 
Jan 16 08:59:20.782140 containerd[1468]: time="2025-01-16T08:59:20.781903672Z" level=info msg="TearDown network for sandbox \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\" successfully" Jan 16 08:59:20.782140 containerd[1468]: time="2025-01-16T08:59:20.782030063Z" level=info msg="StopPodSandbox for \"94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78\" returns successfully" Jan 16 08:59:20.805684 containerd[1468]: time="2025-01-16T08:59:20.805584428Z" level=info msg="shim disconnected" id=7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90 namespace=k8s.io Jan 16 08:59:20.805950 containerd[1468]: time="2025-01-16T08:59:20.805777065Z" level=warning msg="cleaning up after shim disconnected" id=7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90 namespace=k8s.io Jan 16 08:59:20.805950 containerd[1468]: time="2025-01-16T08:59:20.805789417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:20.827133 containerd[1468]: time="2025-01-16T08:59:20.826216178Z" level=info msg="TearDown network for sandbox \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" successfully" Jan 16 08:59:20.827133 containerd[1468]: time="2025-01-16T08:59:20.826999250Z" level=info msg="StopPodSandbox for \"7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90\" returns successfully" Jan 16 08:59:20.864683 kubelet[2537]: I0116 08:59:20.864201 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2abbf928-a43b-436c-9638-d9f434aa963f-cilium-config-path\") pod \"2abbf928-a43b-436c-9638-d9f434aa963f\" (UID: \"2abbf928-a43b-436c-9638-d9f434aa963f\") " Jan 16 08:59:20.864683 kubelet[2537]: I0116 08:59:20.864296 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8b2v\" (UniqueName: \"kubernetes.io/projected/2abbf928-a43b-436c-9638-d9f434aa963f-kube-api-access-r8b2v\") pod \"2abbf928-a43b-436c-9638-d9f434aa963f\" (UID: \"2abbf928-a43b-436c-9638-d9f434aa963f\") " Jan 16 08:59:20.871602 kubelet[2537]: I0116 08:59:20.871519 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2abbf928-a43b-436c-9638-d9f434aa963f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2abbf928-a43b-436c-9638-d9f434aa963f" (UID: "2abbf928-a43b-436c-9638-d9f434aa963f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 08:59:20.873425 kubelet[2537]: I0116 08:59:20.873353 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abbf928-a43b-436c-9638-d9f434aa963f-kube-api-access-r8b2v" (OuterVolumeSpecName: "kube-api-access-r8b2v") pod "2abbf928-a43b-436c-9638-d9f434aa963f" (UID: "2abbf928-a43b-436c-9638-d9f434aa963f"). InnerVolumeSpecName "kube-api-access-r8b2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 08:59:20.965169 kubelet[2537]: I0116 08:59:20.965085 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-cgroup\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965169 kubelet[2537]: I0116 08:59:20.965158 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-config-path\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965169 kubelet[2537]: I0116 08:59:20.965177 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-kernel\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965169 kubelet[2537]: I0116 08:59:20.965192 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-xtables-lock\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965209 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-hostproc\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965223 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-etc-cni-netd\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965240 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-hubble-tls\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965256 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7mn\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-kube-api-access-cv7mn\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965271 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-net\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965510 kubelet[2537]: I0116 08:59:20.965286 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-lib-modules\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: 
\"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965299 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-run\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965315 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965333 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cni-path\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965352 2537 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-bpf-maps\") pod \"af5700ec-8778-4db6-9014-9cef58e0ee89\" (UID: \"af5700ec-8778-4db6-9014-9cef58e0ee89\") " Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965395 2537 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r8b2v\" (UniqueName: \"kubernetes.io/projected/2abbf928-a43b-436c-9638-d9f434aa963f-kube-api-access-r8b2v\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:20.965749 kubelet[2537]: I0116 08:59:20.965406 2537 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2abbf928-a43b-436c-9638-d9f434aa963f-cilium-config-path\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:20.966093 kubelet[2537]: I0116 08:59:20.965466 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.966093 kubelet[2537]: I0116 08:59:20.965503 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.969488 kubelet[2537]: I0116 08:59:20.968292 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 08:59:20.969488 kubelet[2537]: I0116 08:59:20.968362 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.969488 kubelet[2537]: I0116 08:59:20.968380 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.969488 kubelet[2537]: I0116 08:59:20.968426 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.970493 kubelet[2537]: I0116 08:59:20.970443 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-kube-api-access-cv7mn" (OuterVolumeSpecName: "kube-api-access-cv7mn") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "kube-api-access-cv7mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 08:59:20.970682 kubelet[2537]: I0116 08:59:20.970664 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.970786 kubelet[2537]: I0116 08:59:20.970764 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.970871 kubelet[2537]: I0116 08:59:20.970858 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-hostproc" (OuterVolumeSpecName: "hostproc") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.971101 kubelet[2537]: I0116 08:59:20.971082 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.971845 kubelet[2537]: I0116 08:59:20.971810 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 16 08:59:20.971979 kubelet[2537]: I0116 08:59:20.971867 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cni-path" (OuterVolumeSpecName: "cni-path") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 08:59:20.975043 kubelet[2537]: I0116 08:59:20.974992 2537 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af5700ec-8778-4db6-9014-9cef58e0ee89" (UID: "af5700ec-8778-4db6-9014-9cef58e0ee89"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 08:59:21.067295 kubelet[2537]: I0116 08:59:21.066056 2537 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cni-path\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.067609 kubelet[2537]: I0116 08:59:21.067584 2537 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-bpf-maps\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.067751 kubelet[2537]: I0116 08:59:21.067727 2537 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-cgroup\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.067830 kubelet[2537]: I0116 08:59:21.067815 2537 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-config-path\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068042 kubelet[2537]: I0116 08:59:21.068016 2537 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-kernel\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068170 2537 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-xtables-lock\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068192 2537 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-hostproc\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068205 2537 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-etc-cni-netd\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068216 2537 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-hubble-tls\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068230 2537 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cv7mn\" (UniqueName: \"kubernetes.io/projected/af5700ec-8778-4db6-9014-9cef58e0ee89-kube-api-access-cv7mn\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068244 2537 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-host-proc-sys-net\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068256 2537 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-lib-modules\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068308 kubelet[2537]: I0116 08:59:21.068269 2537 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af5700ec-8778-4db6-9014-9cef58e0ee89-cilium-run\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.068630 kubelet[2537]: I0116 08:59:21.068280 2537 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af5700ec-8778-4db6-9014-9cef58e0ee89-clustermesh-secrets\") on node \"ci-4152.2.0-e-9b059e58c2\" DevicePath \"\"" Jan 16 08:59:21.489573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90-rootfs.mount: Deactivated successfully. Jan 16 08:59:21.489737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e2d5f13f8add29633e3076f80930ab838f552e43bc94be51ed6aabe56a34c90-shm.mount: Deactivated successfully. Jan 16 08:59:21.489853 systemd[1]: var-lib-kubelet-pods-af5700ec\x2d8778\x2d4db6\x2d9014\x2d9cef58e0ee89-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 16 08:59:21.490022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94c7ac535cb7ab26ccfc426e03f98bafa3eb3a1c491a98a5cef0f307f4f3bf78-rootfs.mount: Deactivated successfully. Jan 16 08:59:21.490115 systemd[1]: var-lib-kubelet-pods-2abbf928\x2da43b\x2d436c\x2d9638\x2dd9f434aa963f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr8b2v.mount: Deactivated successfully. Jan 16 08:59:21.490185 systemd[1]: var-lib-kubelet-pods-af5700ec\x2d8778\x2d4db6\x2d9014\x2d9cef58e0ee89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcv7mn.mount: Deactivated successfully. Jan 16 08:59:21.490256 systemd[1]: var-lib-kubelet-pods-af5700ec\x2d8778\x2d4db6\x2d9014\x2d9cef58e0ee89-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 16 08:59:21.598770 kubelet[2537]: I0116 08:59:21.597166 2537 scope.go:117] "RemoveContainer" containerID="f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060" Jan 16 08:59:21.607054 systemd[1]: Removed slice kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice - libcontainer container kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice. Jan 16 08:59:21.607227 systemd[1]: kubepods-burstable-podaf5700ec_8778_4db6_9014_9cef58e0ee89.slice: Consumed 8.595s CPU time. Jan 16 08:59:21.613119 systemd[1]: Removed slice kubepods-besteffort-pod2abbf928_a43b_436c_9638_d9f434aa963f.slice - libcontainer container kubepods-besteffort-pod2abbf928_a43b_436c_9638_d9f434aa963f.slice. Jan 16 08:59:21.617968 containerd[1468]: time="2025-01-16T08:59:21.617813890Z" level=info msg="RemoveContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\"" Jan 16 08:59:21.626463 containerd[1468]: time="2025-01-16T08:59:21.626326872Z" level=info msg="RemoveContainer for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" returns successfully" Jan 16 08:59:21.628762 kubelet[2537]: I0116 08:59:21.627135 2537 scope.go:117] "RemoveContainer" containerID="2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce" Jan 16 08:59:21.630219 containerd[1468]: time="2025-01-16T08:59:21.629963009Z" level=info msg="RemoveContainer for \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\"" Jan 16 08:59:21.634878 containerd[1468]: time="2025-01-16T08:59:21.634825276Z" level=info msg="RemoveContainer for \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\" returns successfully" Jan 16 08:59:21.635952 kubelet[2537]: I0116 08:59:21.635894 2537 scope.go:117] "RemoveContainer" containerID="7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5" Jan 16 08:59:21.637528 containerd[1468]: time="2025-01-16T08:59:21.637488182Z" level=info msg="RemoveContainer for \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\"" Jan 16 08:59:21.650974 containerd[1468]: time="2025-01-16T08:59:21.649844025Z" level=info msg="RemoveContainer for \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\" returns successfully" Jan 16 08:59:21.651375 kubelet[2537]: I0116 08:59:21.651347 2537 scope.go:117] "RemoveContainer" containerID="ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069" Jan 16 08:59:21.658141 containerd[1468]: time="2025-01-16T08:59:21.658098186Z" level=info msg="RemoveContainer for \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\"" Jan 16 08:59:21.661655 containerd[1468]: time="2025-01-16T08:59:21.661593229Z" level=info msg="RemoveContainer for \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\" returns successfully" Jan 16 08:59:21.662044 kubelet[2537]: I0116 08:59:21.661971 2537 scope.go:117] "RemoveContainer" containerID="145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa" Jan 16 08:59:21.666149 containerd[1468]: time="2025-01-16T08:59:21.665010600Z" level=info msg="RemoveContainer for \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\"" Jan 16 08:59:21.668584 containerd[1468]: time="2025-01-16T08:59:21.668538593Z" level=info msg="RemoveContainer for \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\" returns successfully" Jan 16 08:59:21.669181 kubelet[2537]: I0116 08:59:21.669034 2537 scope.go:117] "RemoveContainer" containerID="f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060"
Jan 16 08:59:21.671065 containerd[1468]: time="2025-01-16T08:59:21.670999566Z" level=error msg="ContainerStatus for \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\": not found" Jan 16 08:59:21.671498 kubelet[2537]: E0116 08:59:21.671469 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\": not found" containerID="f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060" Jan 16 08:59:21.672636 kubelet[2537]: I0116 08:59:21.672021 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060"} err="failed to get container status \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3796d75fae548d85339fbeebb0244c45977e6677cf02c3e4562c9a13c85e060\": not found" Jan 16 08:59:21.672636 kubelet[2537]: I0116 08:59:21.672173 2537 scope.go:117] "RemoveContainer" containerID="2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce" Jan 16 08:59:21.672802 containerd[1468]: time="2025-01-16T08:59:21.672484249Z" level=error msg="ContainerStatus for \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\": not found" Jan 16 08:59:21.674956 kubelet[2537]: E0116 08:59:21.674185 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\": not found" containerID="2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce" Jan 16 08:59:21.674956 kubelet[2537]: I0116 08:59:21.674228 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce"} err="failed to get container status \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fbb22a62b087d34743065ea0e437758b6c3b5c0eb73598c8dba845f287023ce\": not found" Jan 16 08:59:21.674956 kubelet[2537]: I0116 08:59:21.674258 2537 scope.go:117] "RemoveContainer" containerID="7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5" Jan 16 08:59:21.675171 containerd[1468]: time="2025-01-16T08:59:21.674711098Z" level=error msg="ContainerStatus for \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\": not found" Jan 16 08:59:21.676910 kubelet[2537]: E0116 08:59:21.676158 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\": not found" containerID="7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5" Jan 16 08:59:21.676910
kubelet[2537]: I0116 08:59:21.676196 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5"} err="failed to get container status \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c1ef1a4ea03c7fde7df5339761e90ab1e0fbd975423f4a1dcd6ace07c609ac5\": not found" Jan 16 08:59:21.676910 kubelet[2537]: I0116 08:59:21.676237 2537 scope.go:117] "RemoveContainer" containerID="ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069" Jan 16 08:59:21.676910 kubelet[2537]: E0116 08:59:21.676674 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\": not found" containerID="ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069" Jan 16 08:59:21.676910 kubelet[2537]: I0116 08:59:21.676698 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069"} err="failed to get container status \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\": not found" Jan 16 08:59:21.676910 kubelet[2537]: I0116 08:59:21.676721 2537 scope.go:117] "RemoveContainer" containerID="145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa" Jan 16 08:59:21.677273 containerd[1468]: time="2025-01-16T08:59:21.676530000Z" level=error msg="ContainerStatus for \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae0a72f901fa4fad5f4ddc49e23f2d6b872c1663207f234df0ee46725be7a069\": not found" Jan 16 08:59:21.679168 containerd[1468]: time="2025-01-16T08:59:21.677775582Z" level=error msg="ContainerStatus for \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\": not found" Jan 16 08:59:21.679634 kubelet[2537]: E0116 08:59:21.679095 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\": not found" containerID="145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa" Jan 16 08:59:21.679634 kubelet[2537]: I0116 08:59:21.679371 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa"} err="failed to get container status \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"145b7cc3ad037c885fae61dcf5387d3390f49a783e0cb6188d77adcd924038fa\": not found" Jan 16 08:59:21.679634 kubelet[2537]: I0116 08:59:21.679416 2537 scope.go:117] "RemoveContainer" containerID="5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a" Jan 16 08:59:21.686132 containerd[1468]: time="2025-01-16T08:59:21.685735347Z" level=info 
msg="RemoveContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\"" Jan 16 08:59:21.694061 containerd[1468]: time="2025-01-16T08:59:21.693670066Z" level=info msg="RemoveContainer for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" returns successfully" Jan 16 08:59:21.696254 kubelet[2537]: I0116 08:59:21.694398 2537 scope.go:117] "RemoveContainer" containerID="5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a" Jan 16 08:59:21.697190 containerd[1468]: time="2025-01-16T08:59:21.696659586Z" level=error msg="ContainerStatus for \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\": not found" Jan 16 08:59:21.697640 kubelet[2537]: E0116 08:59:21.697503 2537 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\": not found" containerID="5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a" Jan 16 08:59:21.697933 kubelet[2537]: I0116 08:59:21.697545 2537 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a"} err="failed to get container status \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b747688873f90f3408f96f03dda5b296ffc984b9323f9eb1f65d8fa5e3eff4a\": not found" Jan 16 08:59:22.277412 kubelet[2537]: I0116 08:59:22.276248 2537 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2abbf928-a43b-436c-9638-d9f434aa963f" path="/var/lib/kubelet/pods/2abbf928-a43b-436c-9638-d9f434aa963f/volumes" Jan 16 08:59:22.277412 kubelet[2537]: I0116 08:59:22.276918 2537 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" path="/var/lib/kubelet/pods/af5700ec-8778-4db6-9014-9cef58e0ee89/volumes" Jan 16 08:59:22.357904 sshd[4152]: Connection closed by 147.75.109.163 port 50856 Jan 16 08:59:22.359073 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Jan 16 08:59:22.370645 systemd[1]: sshd@23-64.227.106.156:22-147.75.109.163:50856.service: Deactivated successfully. Jan 16 08:59:22.374850 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 08:59:22.375576 systemd[1]: session-24.scope: Consumed 1.106s CPU time. Jan 16 08:59:22.378194 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 16 08:59:22.385403 systemd[1]: Started sshd@24-64.227.106.156:22-147.75.109.163:50868.service - OpenSSH per-connection server daemon (147.75.109.163:50868). Jan 16 08:59:22.390199 systemd-logind[1448]: Removed session 24. Jan 16 08:59:22.485807 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 50868 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 08:59:22.487991 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:59:22.494759 systemd-logind[1448]: New session 25 of user core. Jan 16 08:59:22.499170 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 16 08:59:23.378018 sshd[4312]: Connection closed by 147.75.109.163 port 50868
Jan 16 08:59:23.380596 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Jan 16 08:59:23.393388 systemd[1]: sshd@24-64.227.106.156:22-147.75.109.163:50868.service: Deactivated successfully.
Jan 16 08:59:23.396904 systemd[1]: session-25.scope: Deactivated successfully.
Jan 16 08:59:23.401891 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Jan 16 08:59:23.407622 kubelet[2537]: E0116 08:59:23.407454 2537 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413502 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2abbf928-a43b-436c-9638-d9f434aa963f" containerName="cilium-operator"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413531 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="mount-bpf-fs"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413538 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="clean-cilium-state"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413545 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="cilium-agent"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413554 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="mount-cgroup"
Jan 16 08:59:23.414583 kubelet[2537]: E0116 08:59:23.413562 2537 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="apply-sysctl-overwrites"
Jan 16 08:59:23.414583 kubelet[2537]: I0116 08:59:23.413607 2537 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abbf928-a43b-436c-9638-d9f434aa963f" containerName="cilium-operator"
Jan 16 08:59:23.414583 kubelet[2537]: I0116 08:59:23.413617 2537 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5700ec-8778-4db6-9014-9cef58e0ee89" containerName="cilium-agent"
Jan 16 08:59:23.415303 systemd[1]: Started sshd@25-64.227.106.156:22-147.75.109.163:50874.service - OpenSSH per-connection server daemon (147.75.109.163:50874).
Jan 16 08:59:23.422144 systemd-logind[1448]: Removed session 25.
Jan 16 08:59:23.438960 systemd[1]: Created slice kubepods-burstable-podee1096bc_27ac_4a0d_96ef_54b657589d9e.slice - libcontainer container kubepods-burstable-podee1096bc_27ac_4a0d_96ef_54b657589d9e.slice.
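The "Created slice" entry shows the systemd cgroup-driver naming convention for a Burstable pod: the pod's UID with dashes turned into underscores, wrapped in a kubepods-burstable-pod...slice unit name. A small sketch of that transform (the helper is illustrative, not kubelet's code):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameFor reproduces the naming visible in the "Created slice" entry
// above: dashes in the pod UID become underscores inside the unit name,
// since "-" is a path separator in systemd slice names.
func sliceNameFor(podUID string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceNameFor("ee1096bc-27ac-4a0d-96ef-54b657589d9e"))
	// kubepods-burstable-podee1096bc_27ac_4a0d_96ef_54b657589d9e.slice
}
```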
Jan 16 08:59:23.486645 kubelet[2537]: I0116 08:59:23.486438 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee1096bc-27ac-4a0d-96ef-54b657589d9e-clustermesh-secrets\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.486987 kubelet[2537]: I0116 08:59:23.486965 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee1096bc-27ac-4a0d-96ef-54b657589d9e-cilium-config-path\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.487114 kubelet[2537]: I0116 08:59:23.487089 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee1096bc-27ac-4a0d-96ef-54b657589d9e-cilium-ipsec-secrets\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489198 kubelet[2537]: I0116 08:59:23.489072 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-cilium-cgroup\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489198 kubelet[2537]: I0116 08:59:23.489138 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-etc-cni-netd\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489198 kubelet[2537]: I0116 08:59:23.489155 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-hostproc\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489512 kubelet[2537]: I0116 08:59:23.489177 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee1096bc-27ac-4a0d-96ef-54b657589d9e-hubble-tls\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489512 kubelet[2537]: I0116 08:59:23.489374 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-cni-path\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489512 kubelet[2537]: I0116 08:59:23.489392 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-lib-modules\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489414 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-host-proc-sys-kernel\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489629 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-bpf-maps\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489655 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-host-proc-sys-net\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489678 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-cilium-run\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489695 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee1096bc-27ac-4a0d-96ef-54b657589d9e-xtables-lock\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.489839 kubelet[2537]: I0116 08:59:23.489710 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58mz\" (UniqueName: \"kubernetes.io/projected/ee1096bc-27ac-4a0d-96ef-54b657589d9e-kube-api-access-q58mz\") pod \"cilium-xcmgf\" (UID: \"ee1096bc-27ac-4a0d-96ef-54b657589d9e\") " pod="kube-system/cilium-xcmgf"
Jan 16 08:59:23.492716 sshd[4321]: Accepted publickey for core from 147.75.109.163 port 50874 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms
Jan 16 08:59:23.495746 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 08:59:23.503596 systemd-logind[1448]: New session 26 of user core.
Jan 16 08:59:23.507203 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 16 08:59:23.576141 sshd[4323]: Connection closed by 147.75.109.163 port 50874
Jan 16 08:59:23.574078 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Jan 16 08:59:23.590406 systemd[1]: sshd@25-64.227.106.156:22-147.75.109.163:50874.service: Deactivated successfully.
Jan 16 08:59:23.598056 systemd[1]: session-26.scope: Deactivated successfully.
Jan 16 08:59:23.601147 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Jan 16 08:59:23.618728 systemd[1]: Started sshd@26-64.227.106.156:22-147.75.109.163:50876.service - OpenSSH per-connection server daemon (147.75.109.163:50876).
Jan 16 08:59:23.647730 systemd-logind[1448]: Removed session 26.
Jan 16 08:59:23.710290 sshd[4331]: Accepted publickey for core from 147.75.109.163 port 50876 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms
Jan 16 08:59:23.712460 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 08:59:23.719981 systemd-logind[1448]: New session 27 of user core.
Jan 16 08:59:23.728278 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 16 08:59:23.747223 kubelet[2537]: E0116 08:59:23.747149 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:23.749689 containerd[1468]: time="2025-01-16T08:59:23.749617353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcmgf,Uid:ee1096bc-27ac-4a0d-96ef-54b657589d9e,Namespace:kube-system,Attempt:0,}"
Jan 16 08:59:23.786792 containerd[1468]: time="2025-01-16T08:59:23.786438371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 08:59:23.786792 containerd[1468]: time="2025-01-16T08:59:23.786516887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 08:59:23.786792 containerd[1468]: time="2025-01-16T08:59:23.786535063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:23.786792 containerd[1468]: time="2025-01-16T08:59:23.786641435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 08:59:23.826658 systemd[1]: Started cri-containerd-39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd.scope - libcontainer container 39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd.
Jan 16 08:59:23.877629 containerd[1468]: time="2025-01-16T08:59:23.877538529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcmgf,Uid:ee1096bc-27ac-4a0d-96ef-54b657589d9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\""
Jan 16 08:59:23.880529 kubelet[2537]: E0116 08:59:23.880306 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:23.889110 containerd[1468]: time="2025-01-16T08:59:23.889060250Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 16 08:59:23.918979 containerd[1468]: time="2025-01-16T08:59:23.918455808Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc\""
Jan 16 08:59:23.924219 containerd[1468]: time="2025-01-16T08:59:23.921138594Z" level=info msg="StartContainer for \"aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc\""
Jan 16 08:59:23.961906 systemd[1]: Started cri-containerd-aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc.scope - libcontainer container aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc.
Jan 16 08:59:24.007907 containerd[1468]: time="2025-01-16T08:59:24.007729548Z" level=info msg="StartContainer for \"aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc\" returns successfully"
Jan 16 08:59:24.027349 systemd[1]: cri-containerd-aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc.scope: Deactivated successfully.
Jan 16 08:59:24.076986 containerd[1468]: time="2025-01-16T08:59:24.076546909Z" level=info msg="shim disconnected" id=aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc namespace=k8s.io
Jan 16 08:59:24.076986 containerd[1468]: time="2025-01-16T08:59:24.076857991Z" level=warning msg="cleaning up after shim disconnected" id=aabebb35905323da8b3f89975c4bcdb375ceff109c8c5d0be9324d7c4869d4fc namespace=k8s.io
Jan 16 08:59:24.076986 containerd[1468]: time="2025-01-16T08:59:24.076870143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 08:59:24.670170 kubelet[2537]: E0116 08:59:24.670109 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:24.676542 containerd[1468]: time="2025-01-16T08:59:24.676120584Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 16 08:59:24.714213 containerd[1468]: time="2025-01-16T08:59:24.714135478Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297\""
Jan 16 08:59:24.714843 containerd[1468]: time="2025-01-16T08:59:24.714815635Z" level=info msg="StartContainer for \"616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297\""
Jan 16 08:59:24.771298 systemd[1]: Started cri-containerd-616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297.scope - libcontainer container 616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297.
Jan 16 08:59:24.823413 containerd[1468]: time="2025-01-16T08:59:24.823351182Z" level=info msg="StartContainer for \"616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297\" returns successfully"
Jan 16 08:59:24.833581 systemd[1]: cri-containerd-616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297.scope: Deactivated successfully.
Jan 16 08:59:24.876391 containerd[1468]: time="2025-01-16T08:59:24.876120550Z" level=info msg="shim disconnected" id=616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297 namespace=k8s.io
Jan 16 08:59:24.877267 containerd[1468]: time="2025-01-16T08:59:24.876905410Z" level=warning msg="cleaning up after shim disconnected" id=616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297 namespace=k8s.io
Jan 16 08:59:24.877267 containerd[1468]: time="2025-01-16T08:59:24.876965787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 08:59:25.622330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-616c5feb3518e8520e4650624de1be7f06ffe0793c311e7725ba4461b1cb6297-rootfs.mount: Deactivated successfully.
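Each cilium init step — mount-cgroup above, apply-sysctl-overwrites here, and mount-bpf-fs / clean-cilium-state below — follows the same run-to-completion lifecycle: CreateContainer, StartContainer, the scope deactivates when the process exits, and the dead shim is cleaned up before the next step begins. A sketch of that sequencing; step names come from the log, the runner itself is illustrative:

```go
package main

import "fmt"

// initStep models one run-to-completion init container.
type initStep struct {
	name string
	run  func() error
}

func main() {
	// Order matches the log: each step must finish (scope deactivated,
	// shim cleaned up) before the next is created and started.
	steps := []initStep{
		{"mount-cgroup", func() error { return nil }},
		{"apply-sysctl-overwrites", func() error { return nil }},
		{"mount-bpf-fs", func() error { return nil }},
		{"clean-cilium-state", func() error { return nil }},
	}
	for _, s := range steps {
		fmt.Printf("StartContainer for %q\n", s.name)
		if err := s.run(); err != nil {
			// A failing init step halts the sequence, as kubelet would.
			fmt.Printf("%q failed: %v\n", s.name, err)
			return
		}
		fmt.Printf("%q returned successfully; cleaning up shim\n", s.name)
	}
	fmt.Println("all init steps done; starting cilium-agent")
}
```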
Jan 16 08:59:25.679369 kubelet[2537]: E0116 08:59:25.677055 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:25.683848 containerd[1468]: time="2025-01-16T08:59:25.682400392Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 16 08:59:25.716584 containerd[1468]: time="2025-01-16T08:59:25.716532594Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c\""
Jan 16 08:59:25.720798 containerd[1468]: time="2025-01-16T08:59:25.717448277Z" level=info msg="StartContainer for \"5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c\""
Jan 16 08:59:25.774258 systemd[1]: Started cri-containerd-5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c.scope - libcontainer container 5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c.
Jan 16 08:59:25.825657 containerd[1468]: time="2025-01-16T08:59:25.825570311Z" level=info msg="StartContainer for \"5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c\" returns successfully"
Jan 16 08:59:25.833012 systemd[1]: cri-containerd-5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c.scope: Deactivated successfully.
Jan 16 08:59:25.874562 containerd[1468]: time="2025-01-16T08:59:25.874377957Z" level=info msg="shim disconnected" id=5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c namespace=k8s.io
Jan 16 08:59:25.874562 containerd[1468]: time="2025-01-16T08:59:25.874456463Z" level=warning msg="cleaning up after shim disconnected" id=5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c namespace=k8s.io
Jan 16 08:59:25.874562 containerd[1468]: time="2025-01-16T08:59:25.874474136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 08:59:26.277271 kubelet[2537]: E0116 08:59:26.276555 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:26.621893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5046ed84a24a946c5b27e015dcc1f663f2c1abd9d7c282efb76bbce0a5666e5c-rootfs.mount: Deactivated successfully.
Jan 16 08:59:26.687409 kubelet[2537]: E0116 08:59:26.687356 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:26.693857 containerd[1468]: time="2025-01-16T08:59:26.693799751Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 16 08:59:26.725624 containerd[1468]: time="2025-01-16T08:59:26.725556805Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e\""
Jan 16 08:59:26.729160 containerd[1468]: time="2025-01-16T08:59:26.728869907Z" level=info msg="StartContainer for \"c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e\""
Jan 16 08:59:26.779296 systemd[1]: Started cri-containerd-c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e.scope - libcontainer container c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e.
Jan 16 08:59:26.817056 systemd[1]: cri-containerd-c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e.scope: Deactivated successfully.
Jan 16 08:59:26.820542 containerd[1468]: time="2025-01-16T08:59:26.820403728Z" level=info msg="StartContainer for \"c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e\" returns successfully"
Jan 16 08:59:26.863411 containerd[1468]: time="2025-01-16T08:59:26.863156970Z" level=info msg="shim disconnected" id=c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e namespace=k8s.io
Jan 16 08:59:26.863411 containerd[1468]: time="2025-01-16T08:59:26.863215748Z" level=warning msg="cleaning up after shim disconnected" id=c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e namespace=k8s.io
Jan 16 08:59:26.863411 containerd[1468]: time="2025-01-16T08:59:26.863226460Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 08:59:27.274903 kubelet[2537]: E0116 08:59:27.274796 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:27.623443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ffb2c74f1ddba261b28a9a146cb9af35790127dabe43cb120aca4fbe124a6e-rootfs.mount: Deactivated successfully.
Jan 16 08:59:27.695900 kubelet[2537]: E0116 08:59:27.695866 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:27.702557 containerd[1468]: time="2025-01-16T08:59:27.701537910Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 16 08:59:27.733988 containerd[1468]: time="2025-01-16T08:59:27.733858060Z" level=info msg="CreateContainer within sandbox \"39ca2840056659533d55cfcb10b56802bd7c8cff2775081fc18219a0c536dafd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9\""
Jan 16 08:59:27.735496 containerd[1468]: time="2025-01-16T08:59:27.735038708Z" level=info msg="StartContainer for \"16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9\""
Jan 16 08:59:27.787363 systemd[1]: Started cri-containerd-16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9.scope - libcontainer container 16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9.
Jan 16 08:59:27.852201 containerd[1468]: time="2025-01-16T08:59:27.852143904Z" level=info msg="StartContainer for \"16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9\" returns successfully"
Jan 16 08:59:28.388972 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 16 08:59:28.712298 kubelet[2537]: E0116 08:59:28.711128 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:28.735035 kubelet[2537]: I0116 08:59:28.734896 2537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xcmgf" podStartSLOduration=5.734864848 podStartE2EDuration="5.734864848s" podCreationTimestamp="2025-01-16 08:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:28.734561055 +0000 UTC m=+100.633942261" watchObservedRunningTime="2025-01-16 08:59:28.734864848 +0000 UTC m=+100.634246053"
Jan 16 08:59:29.748986 kubelet[2537]: E0116 08:59:29.748851 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:31.979292 systemd-networkd[1376]: lxc_health: Link UP
Jan 16 08:59:32.004207 systemd-networkd[1376]: lxc_health: Gained carrier
Jan 16 08:59:32.863573 systemd[1]: run-containerd-runc-k8s.io-16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9-runc.yub7vm.mount: Deactivated successfully.
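The pod_startup_latency_tracker entry above reports podStartSLOduration=5.734864848, which is simply observedRunningTime minus podCreationTimestamp (no image pulls happened here, so the zero-valued firstStartedPulling/lastFinishedPulling timestamps drop out). A sketch reproducing that arithmetic, with the layout string chosen to match the log's timestamp format:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching "2025-01-16 08:59:23 +0000 UTC" as printed in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-16 08:59:23 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-16 08:59:28.734864848 +0000 UTC")
	// Matches podStartSLOduration in the entry above: 5.734864848s.
	fmt.Println(running.Sub(created))
}
```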
Jan 16 08:59:33.701093 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 16 08:59:33.750065 kubelet[2537]: E0116 08:59:33.749818 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:34.727245 kubelet[2537]: E0116 08:59:34.726921 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:35.088347 systemd[1]: run-containerd-runc-k8s.io-16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9-runc.Rr6KFz.mount: Deactivated successfully.
Jan 16 08:59:35.729752 kubelet[2537]: E0116 08:59:35.729706 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 08:59:37.223365 systemd[1]: run-containerd-runc-k8s.io-16bba5a2428458d5b9b263c0059bd88be7212499e149550ba14a9eeace996fd9-runc.i53peh.mount: Deactivated successfully.
Jan 16 08:59:37.290558 sshd[4337]: Connection closed by 147.75.109.163 port 50876
Jan 16 08:59:37.292437 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Jan 16 08:59:37.298101 systemd[1]: sshd@26-64.227.106.156:22-147.75.109.163:50876.service: Deactivated successfully.
Jan 16 08:59:37.301186 systemd[1]: session-27.scope: Deactivated successfully.
Jan 16 08:59:37.304097 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Jan 16 08:59:37.305924 systemd-logind[1448]: Removed session 27.
Jan 16 08:59:39.275209 kubelet[2537]: E0116 08:59:39.275138 2537 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
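The recurring "Nameserver limits exceeded" entries (kubelet's dns.go, cited in the log itself) reflect the classic resolv.conf cap of three nameservers that glibc honors; anything past the limit is dropped, and note that the applied line here even carries a duplicate (67.207.67.3 appears twice). A sketch of that truncation; the helper is mine, not kubelet's implementation:

```go
package main

import "fmt"

// capNameservers keeps at most limit entries, in order, mirroring the
// behavior behind the "some nameservers have been omitted" entries above.
func capNameservers(servers []string, limit int) (kept, omitted []string) {
	if len(servers) <= limit {
		return servers, nil
	}
	return servers[:limit], servers[limit:]
}

func main() {
	applied := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3"} // from the log; note the duplicate
	extra := append(applied, "10.0.0.2")                             // hypothetical fourth server
	kept, omitted := capNameservers(extra, 3)
	fmt.Println("kept:", kept, "omitted:", omitted)
}
```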