Nov 13 08:27:16.039115 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 21:10:03 -00 2024 Nov 13 08:27:16.039141 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 08:27:16.039156 kernel: BIOS-provided physical RAM map: Nov 13 08:27:16.039164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 13 08:27:16.039171 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 13 08:27:16.039178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 13 08:27:16.039187 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Nov 13 08:27:16.039194 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Nov 13 08:27:16.039202 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 13 08:27:16.039211 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 13 08:27:16.039219 kernel: NX (Execute Disable) protection: active Nov 13 08:27:16.039226 kernel: APIC: Static calls initialized Nov 13 08:27:16.039233 kernel: SMBIOS 2.8 present. Nov 13 08:27:16.039241 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 13 08:27:16.039250 kernel: Hypervisor detected: KVM Nov 13 08:27:16.039261 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 13 08:27:16.039269 kernel: kvm-clock: using sched offset of 3658011876 cycles Nov 13 08:27:16.039278 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 13 08:27:16.039286 kernel: tsc: Detected 1995.312 MHz processor Nov 13 08:27:16.039295 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 13 08:27:16.039303 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 13 08:27:16.039310 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Nov 13 08:27:16.039317 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 13 08:27:16.039324 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 13 08:27:16.039334 kernel: ACPI: Early table checksum verification disabled Nov 13 08:27:16.039341 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Nov 13 08:27:16.039349 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039356 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039362 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039369 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 13 08:27:16.039376 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039383 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039390 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039399 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 13 08:27:16.039406 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 13 08:27:16.039413 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 13 08:27:16.039420 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 13 08:27:16.039426 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 13 08:27:16.039433 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 13 08:27:16.039440 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 13 08:27:16.039454 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 13 08:27:16.039461 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 13 08:27:16.039468 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 13 08:27:16.039475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 13 08:27:16.041761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 13 08:27:16.041772 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Nov 13 08:27:16.041781 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Nov 13 08:27:16.041806 kernel: Zone ranges: Nov 13 08:27:16.041820 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 13 08:27:16.041829 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Nov 13 08:27:16.041837 kernel: Normal empty Nov 13 08:27:16.041846 kernel: Movable zone start for each node Nov 13 08:27:16.041854 kernel: Early memory node ranges Nov 13 08:27:16.041863 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 13 08:27:16.041871 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Nov 13 08:27:16.041880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Nov 13 08:27:16.041891 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 13 08:27:16.041899 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 13 08:27:16.041906 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Nov 13 08:27:16.041915 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 13 08:27:16.041928 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 13 08:27:16.041938 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 13 08:27:16.041946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 13 08:27:16.041953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 13 08:27:16.041961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 13 08:27:16.041971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 13 08:27:16.041978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 13 08:27:16.041985 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 13 08:27:16.041993 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 13 08:27:16.042001 kernel: TSC deadline timer available Nov 13 08:27:16.042008 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 13 08:27:16.042015 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 13 08:27:16.042023 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 13 08:27:16.042030 kernel: Booting paravirtualized kernel on KVM Nov 13 08:27:16.042041 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 13 08:27:16.042049 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 13 08:27:16.042056 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 13 08:27:16.042063 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 13 08:27:16.042071 kernel: pcpu-alloc: [0] 0 1 Nov 13 08:27:16.042078 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 13 08:27:16.042093 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 08:27:16.042105 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 13 08:27:16.042121 kernel: random: crng init done Nov 13 08:27:16.042134 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 13 08:27:16.042147 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 13 08:27:16.042159 kernel: Fallback order for Node 0: 0 Nov 13 08:27:16.042172 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Nov 13 08:27:16.042181 kernel: Policy zone: DMA32 Nov 13 08:27:16.042189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 13 08:27:16.042201 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22736K rodata, 42968K init, 2220K bss, 125148K reserved, 0K cma-reserved) Nov 13 08:27:16.042213 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 13 08:27:16.042230 kernel: Kernel/User page tables isolation: enabled Nov 13 08:27:16.042242 kernel: ftrace: allocating 37801 entries in 148 pages Nov 13 08:27:16.042259 kernel: ftrace: allocated 148 pages with 3 groups Nov 13 08:27:16.042272 kernel: Dynamic Preempt: voluntary Nov 13 08:27:16.042284 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 13 08:27:16.042298 kernel: rcu: RCU event tracing is enabled. Nov 13 08:27:16.042312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 13 08:27:16.042325 kernel: Trampoline variant of Tasks RCU enabled. Nov 13 08:27:16.042338 kernel: Rude variant of Tasks RCU enabled. Nov 13 08:27:16.042356 kernel: Tracing variant of Tasks RCU enabled. Nov 13 08:27:16.042369 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 13 08:27:16.042377 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 13 08:27:16.042385 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 13 08:27:16.042393 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 13 08:27:16.042400 kernel: Console: colour VGA+ 80x25 Nov 13 08:27:16.042412 kernel: printk: console [tty0] enabled Nov 13 08:27:16.042426 kernel: printk: console [ttyS0] enabled Nov 13 08:27:16.042435 kernel: ACPI: Core revision 20230628 Nov 13 08:27:16.042450 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 13 08:27:16.042546 kernel: APIC: Switch to symmetric I/O mode setup Nov 13 08:27:16.042562 kernel: x2apic enabled Nov 13 08:27:16.042574 kernel: APIC: Switched APIC routing to: physical x2apic Nov 13 08:27:16.042586 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 13 08:27:16.042593 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Nov 13 08:27:16.042601 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312) Nov 13 08:27:16.042611 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 13 08:27:16.042619 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 13 08:27:16.042640 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 13 08:27:16.042648 kernel: Spectre V2 : Mitigation: Retpolines Nov 13 08:27:16.042656 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 13 08:27:16.042667 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 13 08:27:16.042676 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 13 08:27:16.042684 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 13 08:27:16.042692 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 13 08:27:16.042700 kernel: MDS: Mitigation: Clear CPU buffers Nov 13 08:27:16.042709 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 13 08:27:16.042720 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 13 08:27:16.042728 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 13 08:27:16.042736 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 13 08:27:16.042744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 13 08:27:16.042752 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 13 08:27:16.042761 kernel: Freeing SMP alternatives memory: 32K Nov 13 08:27:16.042769 kernel: pid_max: default: 32768 minimum: 301 Nov 13 08:27:16.042777 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 13 08:27:16.042788 kernel: landlock: Up and running. Nov 13 08:27:16.042796 kernel: SELinux: Initializing. Nov 13 08:27:16.042804 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 13 08:27:16.042812 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 13 08:27:16.042820 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 13 08:27:16.042828 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 13 08:27:16.042837 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 13 08:27:16.042849 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 13 08:27:16.042867 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Nov 13 08:27:16.042879 kernel: signal: max sigframe size: 1776 Nov 13 08:27:16.042887 kernel: rcu: Hierarchical SRCU implementation. Nov 13 08:27:16.042895 kernel: rcu: Max phase no-delay instances is 400. Nov 13 08:27:16.042903 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 13 08:27:16.042911 kernel: smp: Bringing up secondary CPUs ... Nov 13 08:27:16.042919 kernel: smpboot: x86: Booting SMP configuration: Nov 13 08:27:16.042927 kernel: .... node #0, CPUs: #1 Nov 13 08:27:16.042935 kernel: smp: Brought up 1 node, 2 CPUs Nov 13 08:27:16.042943 kernel: smpboot: Max logical packages: 1 Nov 13 08:27:16.042955 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Nov 13 08:27:16.042963 kernel: devtmpfs: initialized Nov 13 08:27:16.042971 kernel: x86/mm: Memory block size: 128MB Nov 13 08:27:16.042979 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 13 08:27:16.042987 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 13 08:27:16.042995 kernel: pinctrl core: initialized pinctrl subsystem Nov 13 08:27:16.043003 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 13 08:27:16.043012 kernel: audit: initializing netlink subsys (disabled) Nov 13 08:27:16.043020 kernel: audit: type=2000 audit(1731486435.021:1): state=initialized audit_enabled=0 res=1 Nov 13 08:27:16.043031 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 13 08:27:16.043039 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 13 08:27:16.043047 kernel: cpuidle: using governor menu Nov 13 08:27:16.043055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 13 08:27:16.043063 kernel: dca service started, version 1.12.1 Nov 13 08:27:16.043071 kernel: PCI: Using configuration type 1 for base access Nov 13 08:27:16.043079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 13 08:27:16.043088 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 13 08:27:16.043096 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 13 08:27:16.043107 kernel: ACPI: Added _OSI(Module Device) Nov 13 08:27:16.043115 kernel: ACPI: Added _OSI(Processor Device) Nov 13 08:27:16.043123 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 13 08:27:16.043131 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 13 08:27:16.043139 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 13 08:27:16.043146 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 13 08:27:16.043154 kernel: ACPI: Interpreter enabled Nov 13 08:27:16.043162 kernel: ACPI: PM: (supports S0 S5) Nov 13 08:27:16.043170 kernel: ACPI: Using IOAPIC for interrupt routing Nov 13 08:27:16.043181 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 13 08:27:16.043190 kernel: PCI: Using E820 reservations for host bridge windows Nov 13 08:27:16.043198 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 13 08:27:16.043206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 13 08:27:16.044555 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 13 08:27:16.044751 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 13 08:27:16.044853 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 13 08:27:16.044872 kernel: acpiphp: Slot [3] registered Nov 13 08:27:16.044882 kernel: acpiphp: Slot [4] registered Nov 13 08:27:16.044890 kernel: acpiphp: Slot [5] registered Nov 13 08:27:16.044899 kernel: acpiphp: Slot [6] registered Nov 13 08:27:16.044907 kernel: acpiphp: Slot [7] registered Nov 13 08:27:16.044915 kernel: acpiphp: Slot [8] registered Nov 13 08:27:16.044923 kernel: acpiphp: Slot [9] registered Nov 13 08:27:16.044931 kernel: acpiphp: Slot [10] registered Nov 13 08:27:16.044940 kernel: acpiphp: Slot [11] registered Nov 13 08:27:16.044951 kernel: acpiphp: Slot [12] registered Nov 13 08:27:16.044959 kernel: acpiphp: Slot [13] registered Nov 13 08:27:16.044967 kernel: acpiphp: Slot [14] registered Nov 13 08:27:16.044974 kernel: acpiphp: Slot [15] registered Nov 13 08:27:16.044982 kernel: acpiphp: Slot [16] registered Nov 13 08:27:16.044990 kernel: acpiphp: Slot [17] registered Nov 13 08:27:16.044998 kernel: acpiphp: Slot [18] registered Nov 13 08:27:16.045006 kernel: acpiphp: Slot [19] registered Nov 13 08:27:16.045014 kernel: acpiphp: Slot [20] registered Nov 13 08:27:16.045022 kernel: acpiphp: Slot [21] registered Nov 13 08:27:16.045033 kernel: acpiphp: Slot [22] registered Nov 13 08:27:16.045041 kernel: acpiphp: Slot [23] registered Nov 13 08:27:16.045049 kernel: acpiphp: Slot [24] registered Nov 13 08:27:16.045058 kernel: acpiphp: Slot [25] registered Nov 13 08:27:16.045066 kernel: acpiphp: Slot [26] registered Nov 13 08:27:16.045074 kernel: acpiphp: Slot [27] registered Nov 13 08:27:16.045081 kernel: acpiphp: Slot [28] registered Nov 13 08:27:16.045089 kernel: acpiphp: Slot [29] registered Nov 13 08:27:16.045098 kernel: acpiphp: Slot [30] registered Nov 13 08:27:16.045108 kernel: acpiphp: Slot [31] registered Nov 13 08:27:16.045116 kernel: PCI host bridge to bus 0000:00 Nov 13 08:27:16.045223 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 13 08:27:16.045313 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Nov 13 08:27:16.045400 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 13 08:27:16.046568 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 13 08:27:16.046707 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 13 08:27:16.046798 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 13 08:27:16.046961 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 13 08:27:16.047069 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 13 08:27:16.047172 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 13 08:27:16.047268 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 13 08:27:16.047363 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 13 08:27:16.047462 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 13 08:27:16.048911 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 13 08:27:16.049078 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 13 08:27:16.049244 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 13 08:27:16.049396 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 13 08:27:16.052164 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 13 08:27:16.052340 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 13 08:27:16.052531 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 13 08:27:16.052674 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 13 08:27:16.052790 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 13 08:27:16.052905 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 13 08:27:16.053001 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 13 08:27:16.053095 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 13 08:27:16.053189 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 13 08:27:16.053305 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 13 08:27:16.053400 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 13 08:27:16.053514 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 13 08:27:16.053609 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 13 08:27:16.053712 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 13 08:27:16.053807 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 13 08:27:16.053903 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 13 08:27:16.053997 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 13 08:27:16.054121 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 13 08:27:16.054225 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 13 08:27:16.054321 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 13 08:27:16.054418 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 13 08:27:16.055683 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 13 08:27:16.055810 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 13 08:27:16.055908 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 13 08:27:16.056012 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 13 08:27:16.056124 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 13 08:27:16.056221 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 13 08:27:16.056318 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 13 08:27:16.056416 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 13 08:27:16.057614 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 13 08:27:16.057737 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 13 08:27:16.057833 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 13 08:27:16.057846 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 13 08:27:16.057860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 13 08:27:16.057875 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 13 08:27:16.057887 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 13 08:27:16.057906 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 13 08:27:16.057919 kernel: iommu: Default domain type: Translated Nov 13 08:27:16.057932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 13 08:27:16.057945 kernel: PCI: Using ACPI for IRQ routing Nov 13 08:27:16.057958 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 13 08:27:16.057972 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 13 08:27:16.057980 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Nov 13 08:27:16.058089 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 13 08:27:16.058184 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 13 08:27:16.058284 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 13 08:27:16.058294 kernel: vgaarb: loaded Nov 13 08:27:16.058303 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 13 08:27:16.058311 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 13 08:27:16.058319 kernel: clocksource: Switched to clocksource kvm-clock Nov 13 08:27:16.058327 kernel: VFS: Disk quotas dquot_6.6.0 Nov 13 08:27:16.058336 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 13 08:27:16.058354 kernel: pnp: PnP ACPI init Nov 13 08:27:16.058373 kernel: pnp: PnP ACPI: found 4 devices Nov 13 08:27:16.058391 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 13 08:27:16.058400 kernel: NET: Registered PF_INET protocol family Nov 13 08:27:16.058408 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 13 08:27:16.058417 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 13 08:27:16.058425 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 13 08:27:16.058434 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 13 08:27:16.058442 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 13 08:27:16.058455 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 13 08:27:16.058561 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 13 08:27:16.058581 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 13 08:27:16.058595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 13 08:27:16.058610 kernel: NET: Registered PF_XDP protocol family Nov 13 08:27:16.058754 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 13 08:27:16.058844 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 13 08:27:16.058929 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 13 08:27:16.059017 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 13 08:27:16.059104 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 13 08:27:16.059216 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 13 08:27:16.059349 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 13 08:27:16.059369 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 13 08:27:16.061620 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40818 usecs Nov 13 08:27:16.061645 kernel: PCI: CLS 0 bytes, default 64 Nov 13 08:27:16.061654 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 13 08:27:16.061663 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Nov 13 08:27:16.061672 kernel: Initialise system trusted keyrings Nov 13 08:27:16.061686 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 13 08:27:16.061695 kernel: Key type asymmetric registered Nov 13 08:27:16.061703 kernel: Asymmetric key parser 'x509' registered Nov 13 08:27:16.061711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 13 08:27:16.061720 kernel: io scheduler mq-deadline registered Nov 13 08:27:16.061729 kernel: io scheduler kyber registered Nov 13 08:27:16.061737 kernel: io scheduler bfq registered Nov 13 08:27:16.061745 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 13 08:27:16.061754 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 13 08:27:16.061766 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 13 08:27:16.061774 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 13 08:27:16.061782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 13 08:27:16.061792 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 13 08:27:16.061806 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 13 08:27:16.061819 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 13 08:27:16.061836 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 13 08:27:16.062102 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 13 08:27:16.062123 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 13 08:27:16.062236 kernel: rtc_cmos 00:03: registered as rtc0 Nov 13 08:27:16.062330 kernel: rtc_cmos 00:03: setting system clock to 2024-11-13T08:27:15 UTC (1731486435) Nov 13 08:27:16.062424 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 13 08:27:16.062435 kernel: intel_pstate: CPU model not supported Nov 13 08:27:16.062443 kernel: NET: Registered PF_INET6 protocol family Nov 13 08:27:16.062451 kernel: Segment Routing with IPv6 Nov 13 08:27:16.062460 kernel: In-situ OAM (IOAM) with IPv6 Nov 13 08:27:16.062589 kernel: NET: Registered PF_PACKET protocol family Nov 13 08:27:16.062603 kernel: Key type dns_resolver registered Nov 13 08:27:16.062611 kernel: IPI shorthand broadcast: enabled Nov 13 08:27:16.062620 kernel: sched_clock: Marking stable (1857004391, 136077302)->(2030069812, -36988119) Nov 13 08:27:16.062628 kernel: registered taskstats version 1 Nov 13 08:27:16.062636 kernel: Loading compiled-in X.509 certificates Nov 13 08:27:16.062644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: d04cb2ddbd5c3ca82936c51f5645ef0dcbdcd3b4' Nov 13 08:27:16.062653 kernel: Key type .fscrypt registered
Nov 13 08:27:16.062661 kernel: Key type fscrypt-provisioning registered Nov 13 08:27:16.062669 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 13 08:27:16.062680 kernel: ima: Allocated hash algorithm: sha1 Nov 13 08:27:16.062689 kernel: ima: No architecture policies found Nov 13 08:27:16.062697 kernel: clk: Disabling unused clocks Nov 13 08:27:16.062705 kernel: Freeing unused kernel image (initmem) memory: 42968K Nov 13 08:27:16.062713 kernel: Write protecting the kernel read-only data: 36864k Nov 13 08:27:16.062739 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Nov 13 08:27:16.062750 kernel: Run /init as init process Nov 13 08:27:16.062758 kernel: with arguments: Nov 13 08:27:16.062767 kernel: /init Nov 13 08:27:16.062778 kernel: with environment: Nov 13 08:27:16.062786 kernel: HOME=/ Nov 13 08:27:16.062794 kernel: TERM=linux Nov 13 08:27:16.062802 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 13 08:27:16.062814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 13 08:27:16.062825 systemd[1]: Detected virtualization kvm. Nov 13 08:27:16.062834 systemd[1]: Detected architecture x86-64. Nov 13 08:27:16.062845 systemd[1]: Running in initrd. Nov 13 08:27:16.062854 systemd[1]: No hostname configured, using default hostname. Nov 13 08:27:16.062862 systemd[1]: Hostname set to . Nov 13 08:27:16.062871 systemd[1]: Initializing machine ID from VM UUID. Nov 13 08:27:16.062880 systemd[1]: Queued start job for default target initrd.target. Nov 13 08:27:16.062889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 13 08:27:16.062897 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 13 08:27:16.062908 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 13 08:27:16.062919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 13 08:27:16.062928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 13 08:27:16.062937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 13 08:27:16.062948 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 13 08:27:16.062957 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 13 08:27:16.062966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 13 08:27:16.062974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 13 08:27:16.062986 systemd[1]: Reached target paths.target - Path Units. Nov 13 08:27:16.062995 systemd[1]: Reached target slices.target - Slice Units. Nov 13 08:27:16.063004 systemd[1]: Reached target swap.target - Swaps. Nov 13 08:27:16.063016 systemd[1]: Reached target timers.target - Timer Units. Nov 13 08:27:16.063025 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 13 08:27:16.063034 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 13 08:27:16.063046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 13 08:27:16.063054 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 13 08:27:16.063063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 13 08:27:16.063072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 13 08:27:16.063081 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 13 08:27:16.063091 systemd[1]: Reached target sockets.target - Socket Units. Nov 13 08:27:16.063099 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 13 08:27:16.063108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 13 08:27:16.063119 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 13 08:27:16.063128 systemd[1]: Starting systemd-fsck-usr.service... Nov 13 08:27:16.063136 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 13 08:27:16.063145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 13 08:27:16.063154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 08:27:16.063163 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 13 08:27:16.063199 systemd-journald[184]: Collecting audit messages is disabled. Nov 13 08:27:16.063225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 13 08:27:16.063234 systemd[1]: Finished systemd-fsck-usr.service. Nov 13 08:27:16.063244 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 13 08:27:16.063259 systemd-journald[184]: Journal started Nov 13 08:27:16.063279 systemd-journald[184]: Runtime Journal (/run/log/journal/2340ca04f28649fb8cfb49e0f799f78f) is 4.9M, max 39.3M, 34.4M free. Nov 13 08:27:16.056535 systemd-modules-load[185]: Inserted module 'overlay' Nov 13 08:27:16.117068 systemd[1]: Started systemd-journald.service - Journal Service. Nov 13 08:27:16.117118 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 13 08:27:16.117137 kernel: Bridge firewalling registered Nov 13 08:27:16.115612 systemd-modules-load[185]: Inserted module 'br_netfilter' Nov 13 08:27:16.117011 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 13 08:27:16.117993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 08:27:16.119637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 13 08:27:16.133806 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 08:27:16.136715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:27:16.139698 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 13 08:27:16.145756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 13 08:27:16.166324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 08:27:16.168597 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 13 08:27:16.170847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 13 08:27:16.180829 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 13 08:27:16.183335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 13 08:27:16.188296 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 13 08:27:16.199693 dracut-cmdline[216]: dracut-dracut-053 Nov 13 08:27:16.206900 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=714367a70d0d672ed3d7ccc2de5247f52d37046778a42409fc8a40b0511373b1 Nov 13 08:27:16.235027 systemd-resolved[221]: Positive Trust Anchors: Nov 13 08:27:16.235042 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 13 08:27:16.235077 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 13 08:27:16.242510 systemd-resolved[221]: Defaulting to hostname 'linux'. Nov 13 08:27:16.244111 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 13 08:27:16.245017 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 13 08:27:16.309562 kernel: SCSI subsystem initialized Nov 13 08:27:16.321535 kernel: Loading iSCSI transport class v2.0-870. Nov 13 08:27:16.337001 kernel: iscsi: registered transport (tcp) Nov 13 08:27:16.364899 kernel: iscsi: registered transport (qla4xxx) Nov 13 08:27:16.365016 kernel: QLogic iSCSI HBA Driver Nov 13 08:27:16.422849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 13 08:27:16.428756 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 13 08:27:16.472500 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 13 08:27:16.472584 kernel: device-mapper: uevent: version 1.0.3 Nov 13 08:27:16.474524 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 13 08:27:16.523539 kernel: raid6: avx2x4 gen() 26277 MB/s Nov 13 08:27:16.540542 kernel: raid6: avx2x2 gen() 28505 MB/s Nov 13 08:27:16.557737 kernel: raid6: avx2x1 gen() 19135 MB/s Nov 13 08:27:16.557817 kernel: raid6: using algorithm avx2x2 gen() 28505 MB/s Nov 13 08:27:16.576603 kernel: raid6: .... xor() 14098 MB/s, rmw enabled Nov 13 08:27:16.576697 kernel: raid6: using avx2x2 recovery algorithm Nov 13 08:27:16.602634 kernel: xor: automatically using best checksumming function avx Nov 13 08:27:16.805538 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 13 08:27:16.822167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 13 08:27:16.834970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 13 08:27:16.849271 systemd-udevd[401]: Using default interface naming scheme 'v255'. Nov 13 08:27:16.853709 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 13 08:27:16.865786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 13 08:27:16.894658 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Nov 13 08:27:16.940906 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 13 08:27:16.947014 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 13 08:27:17.015391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 13 08:27:17.022795 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 13 08:27:17.062733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 13 08:27:17.064928 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 13 08:27:17.067405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 13 08:27:17.068937 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 13 08:27:17.076713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 13 08:27:17.115161 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 13 08:27:17.130714 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 13 08:27:17.216669 kernel: cryptd: max_cpu_qlen set to 1000 Nov 13 08:27:17.216699 kernel: scsi host0: Virtio SCSI HBA Nov 13 08:27:17.216902 kernel: libata version 3.00 loaded. Nov 13 08:27:17.216921 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 13 08:27:17.217082 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 13 08:27:17.217277 kernel: scsi host1: ata_piix Nov 13 08:27:17.217497 kernel: scsi host2: ata_piix Nov 13 08:27:17.217669 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 13 08:27:17.217688 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 13 08:27:17.217705 kernel: AVX2 version of gcm_enc/dec engaged. Nov 13 08:27:17.217726 kernel: AES CTR mode by8 optimization enabled Nov 13 08:27:17.217743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 13 08:27:17.217759 kernel: GPT:9289727 != 125829119 Nov 13 08:27:17.217777 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 13 08:27:17.217801 kernel: GPT:9289727 != 125829119 Nov 13 08:27:17.217817 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 13 08:27:17.217834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 08:27:17.217850 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 13 08:27:17.227800 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Nov 13 08:27:17.227990 kernel: ACPI: bus type USB registered Nov 13 08:27:17.228010 kernel: usbcore: registered new interface driver usbfs Nov 13 08:27:17.228027 kernel: usbcore: registered new interface driver hub Nov 13 08:27:17.228043 kernel: usbcore: registered new device driver usb Nov 13 08:27:17.159589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 13 08:27:17.159714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 08:27:17.160546 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 13 08:27:17.161184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 13 08:27:17.161321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 08:27:17.161977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 08:27:17.165861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 13 08:27:17.297294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 13 08:27:17.303835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 13 08:27:17.328130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 13 08:27:17.415501 kernel: BTRFS: device fsid d498af32-b44b-4318-a942-3a646ccb9d0a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (466) Nov 13 08:27:17.419506 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (455) Nov 13 08:27:17.421274 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 13 08:27:17.442831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 13 08:27:17.454722 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 13 08:27:17.462227 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 13 08:27:17.462689 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 13 08:27:17.462915 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 13 08:27:17.463109 kernel: hub 1-0:1.0: USB hub found Nov 13 08:27:17.463342 kernel: hub 1-0:1.0: 2 ports detected Nov 13 08:27:17.453368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 13 08:27:17.464628 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 13 08:27:17.471872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 13 08:27:17.478898 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 13 08:27:17.503140 disk-uuid[553]: Primary Header is updated. Nov 13 08:27:17.503140 disk-uuid[553]: Secondary Entries is updated. Nov 13 08:27:17.503140 disk-uuid[553]: Secondary Header is updated. Nov 13 08:27:17.509655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 08:27:18.532518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 13 08:27:18.532611 disk-uuid[554]: The operation has completed successfully. Nov 13 08:27:18.612324 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 13 08:27:18.612552 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 13 08:27:18.624795 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 13 08:27:18.639301 sh[565]: Success Nov 13 08:27:18.659512 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 13 08:27:18.754790 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 13 08:27:18.771795 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 13 08:27:18.775425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 13 08:27:18.801545 kernel: BTRFS info (device dm-0): first mount of filesystem d498af32-b44b-4318-a942-3a646ccb9d0a Nov 13 08:27:18.801649 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 13 08:27:18.801677 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 13 08:27:18.803715 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 13 08:27:18.805210 kernel: BTRFS info (device dm-0): using free space tree Nov 13 08:27:18.821301 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 13 08:27:18.823132 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 13 08:27:18.830907 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 13 08:27:18.842832 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 13 08:27:18.873319 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 08:27:18.873395 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 08:27:18.874662 kernel: BTRFS info (device vda6): using free space tree Nov 13 08:27:18.884622 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 08:27:18.901883 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 13 08:27:18.903802 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 08:27:18.912657 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 13 08:27:18.931922 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 13 08:27:19.079022 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 13 08:27:19.087893 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 13 08:27:19.123963 ignition[662]: Ignition 2.20.0 Nov 13 08:27:19.125132 ignition[662]: Stage: fetch-offline Nov 13 08:27:19.125905 ignition[662]: no configs at "/usr/lib/ignition/base.d" Nov 13 08:27:19.125923 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 13 08:27:19.126094 ignition[662]: parsed url from cmdline: "" Nov 13 08:27:19.126100 ignition[662]: no config URL provided Nov 13 08:27:19.126109 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 08:27:19.126124 ignition[662]: no config at "/usr/lib/ignition/user.ign" Nov 13 08:27:19.126133 ignition[662]: failed to fetch config: resource requires networking Nov 13 08:27:19.127919 ignition[662]: Ignition finished successfully Nov 13 08:27:19.134923 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 13 08:27:19.136600 systemd-networkd[754]: lo: Link UP Nov 13 08:27:19.136606 systemd-networkd[754]: lo: Gained carrier Nov 13 08:27:19.139783 systemd-networkd[754]: Enumeration completed Nov 13 08:27:19.140408 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 13 08:27:19.140413 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 13 08:27:19.141687 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 13 08:27:19.141692 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 13 08:27:19.142784 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 13 08:27:19.143317 systemd-networkd[754]: eth0: Link UP Nov 13 08:27:19.143323 systemd-networkd[754]: eth0: Gained carrier Nov 13 08:27:19.143337 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 13 08:27:19.144625 systemd[1]: Reached target network.target - Network. Nov 13 08:27:19.149996 systemd-networkd[754]: eth1: Link UP Nov 13 08:27:19.150002 systemd-networkd[754]: eth1: Gained carrier Nov 13 08:27:19.150021 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 13 08:27:19.152834 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 13 08:27:19.166297 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.18/20 acquired from 169.254.169.253 Nov 13 08:27:19.169610 systemd-networkd[754]: eth0: DHCPv4 address 64.23.149.40/20, gateway 64.23.144.1 acquired from 169.254.169.253 Nov 13 08:27:19.188859 ignition[758]: Ignition 2.20.0 Nov 13 08:27:19.188875 ignition[758]: Stage: fetch Nov 13 08:27:19.189161 ignition[758]: no configs at "/usr/lib/ignition/base.d" Nov 13 08:27:19.189177 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 13 08:27:19.189340 ignition[758]: parsed url from cmdline: "" Nov 13 08:27:19.189346 ignition[758]: no config URL provided Nov 13 08:27:19.189355 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Nov 13 08:27:19.189368 ignition[758]: no config at "/usr/lib/ignition/user.ign" Nov 13 08:27:19.189400 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 13 08:27:19.205854 ignition[758]: GET result: OK Nov 13 08:27:19.206918 ignition[758]: parsing config with SHA512: 29bde3a4777153efc146aa79714bb039f184f18d46bed576c5bd8e23191b5a8edccd9534e0a51c7c12862ee333d655113b3009d0fb1b9da1acc6ae870e134785 Nov 13 08:27:19.213970 unknown[758]: fetched base config from "system" Nov 13 08:27:19.213984 unknown[758]: fetched base config from "system" Nov 13 08:27:19.214528 ignition[758]: fetch: fetch complete Nov 13 08:27:19.213991 unknown[758]: fetched user config from "digitalocean" Nov 13 08:27:19.214537 ignition[758]: fetch: fetch passed Nov 13 08:27:19.216473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 13 08:27:19.214614 ignition[758]: Ignition finished successfully Nov 13 08:27:19.223813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 13 08:27:19.266624 ignition[765]: Ignition 2.20.0 Nov 13 08:27:19.266656 ignition[765]: Stage: kargs Nov 13 08:27:19.266990 ignition[765]: no configs at "/usr/lib/ignition/base.d" Nov 13 08:27:19.267010 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 13 08:27:19.268597 ignition[765]: kargs: kargs passed Nov 13 08:27:19.270380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 13 08:27:19.268707 ignition[765]: Ignition finished successfully Nov 13 08:27:19.279168 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 13 08:27:19.312754 ignition[772]: Ignition 2.20.0 Nov 13 08:27:19.315433 ignition[772]: Stage: disks Nov 13 08:27:19.315729 ignition[772]: no configs at "/usr/lib/ignition/base.d" Nov 13 08:27:19.315742 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 13 08:27:19.317376 ignition[772]: disks: disks passed Nov 13 08:27:19.323588 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 13 08:27:19.317455 ignition[772]: Ignition finished successfully Nov 13 08:27:19.336200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 13 08:27:19.337304 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 13 08:27:19.338844 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 13 08:27:19.339502 systemd[1]: Reached target sysinit.target - System Initialization. Nov 13 08:27:19.340143 systemd[1]: Reached target basic.target - Basic System. Nov 13 08:27:19.350926 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 13 08:27:19.378310 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 13 08:27:19.383867 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 13 08:27:19.396698 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 13 08:27:19.545561 kernel: EXT4-fs (vda9): mounted filesystem 62325592-ead9-4e81-b706-99baa0cf9fff r/w with ordered data mode. Quota mode: none. Nov 13 08:27:19.547119 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 13 08:27:19.548615 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 13 08:27:19.556764 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 08:27:19.568871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 13 08:27:19.573775 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 13 08:27:19.578911 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 13 08:27:19.580045 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 13 08:27:19.580095 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 13 08:27:19.585212 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 13 08:27:19.590694 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (788) Nov 13 08:27:19.595511 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 08:27:19.595598 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 08:27:19.598697 kernel: BTRFS info (device vda6): using free space tree Nov 13 08:27:19.600007 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 13 08:27:19.619541 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 08:27:19.628781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 13 08:27:19.711686 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Nov 13 08:27:19.720093 coreos-metadata[791]: Nov 13 08:27:19.719 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 13 08:27:19.723686 coreos-metadata[790]: Nov 13 08:27:19.722 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 13 08:27:19.726173 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Nov 13 08:27:19.735389 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Nov 13 08:27:19.736730 coreos-metadata[790]: Nov 13 08:27:19.735 INFO Fetch successful Nov 13 08:27:19.737505 coreos-metadata[791]: Nov 13 08:27:19.735 INFO Fetch successful Nov 13 08:27:19.744097 coreos-metadata[791]: Nov 13 08:27:19.744 INFO wrote hostname ci-4152.0.0-f-d2466dff01 to /sysroot/etc/hostname Nov 13 08:27:19.747069 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 13 08:27:19.750420 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 13 08:27:19.752575 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Nov 13 08:27:19.751633 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 13 08:27:19.905073 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 13 08:27:19.910919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 13 08:27:19.919861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 13 08:27:19.932915 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 13 08:27:19.937643 kernel: BTRFS info (device vda6): last unmount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 08:27:19.960386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 13 08:27:19.979096 ignition[910]: INFO : Ignition 2.20.0 Nov 13 08:27:19.979096 ignition[910]: INFO : Stage: mount Nov 13 08:27:19.980910 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 13 08:27:19.980910 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 13 08:27:19.980910 ignition[910]: INFO : mount: mount passed Nov 13 08:27:19.985375 ignition[910]: INFO : Ignition finished successfully Nov 13 08:27:19.983072 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 13 08:27:19.990813 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 13 08:27:20.015852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 13 08:27:20.039559 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (920) Nov 13 08:27:20.043564 kernel: BTRFS info (device vda6): first mount of filesystem 97a326f3-1974-446c-b178-9e746095347a Nov 13 08:27:20.043668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 13 08:27:20.043690 kernel: BTRFS info (device vda6): using free space tree Nov 13 08:27:20.052825 kernel: BTRFS info (device vda6): auto enabling async discard Nov 13 08:27:20.055966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 13 08:27:20.115528 ignition[937]: INFO : Ignition 2.20.0
Nov 13 08:27:20.115528 ignition[937]: INFO : Stage: files
Nov 13 08:27:20.115528 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:20.115528 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:20.120314 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Nov 13 08:27:20.120314 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 13 08:27:20.120314 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 13 08:27:20.125982 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 13 08:27:20.127375 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 13 08:27:20.128760 unknown[937]: wrote ssh authorized keys file for user: core
Nov 13 08:27:20.130237 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 13 08:27:20.133738 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:27:20.135683 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 13 08:27:20.179028 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 13 08:27:20.286277 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 13 08:27:20.286277 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:27:20.286277 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 13 08:27:20.291718 systemd-networkd[754]: eth0: Gained IPv6LL
Nov 13 08:27:20.742907 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 13 08:27:20.824107 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 13 08:27:20.824107 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 13 08:27:20.824107 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 13 08:27:20.824107 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 13 08:27:20.828372 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 13 08:27:20.841705 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 13 08:27:20.841705 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 13 08:27:20.841705 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Nov 13 08:27:20.995876 systemd-networkd[754]: eth1: Gained IPv6LL
Nov 13 08:27:21.102612 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 13 08:27:21.453573 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 13 08:27:21.453573 ignition[937]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 13 08:27:21.456611 ignition[937]: INFO : files: files passed
Nov 13 08:27:21.456611 ignition[937]: INFO : Ignition finished successfully
Nov 13 08:27:21.457460 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 13 08:27:21.465774 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 13 08:27:21.469763 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 13 08:27:21.479645 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 13 08:27:21.479772 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 13 08:27:21.493206 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:21.493206 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:21.497051 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 13 08:27:21.499899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:27:21.501874 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 13 08:27:21.507806 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 13 08:27:21.565348 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 13 08:27:21.566631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 13 08:27:21.569159 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 13 08:27:21.569996 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 13 08:27:21.571648 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 13 08:27:21.577788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 13 08:27:21.608038 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:27:21.616819 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 13 08:27:21.641359 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:27:21.642280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:27:21.644935 systemd[1]: Stopped target timers.target - Timer Units.
Nov 13 08:27:21.645559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 13 08:27:21.645700 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 13 08:27:21.647981 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 13 08:27:21.648704 systemd[1]: Stopped target basic.target - Basic System.
Nov 13 08:27:21.650149 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 13 08:27:21.651575 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 13 08:27:21.652932 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 13 08:27:21.654434 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 13 08:27:21.655912 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 13 08:27:21.657623 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 13 08:27:21.659040 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 13 08:27:21.660440 systemd[1]: Stopped target swap.target - Swaps.
Nov 13 08:27:21.661895 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 13 08:27:21.662101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 13 08:27:21.663600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:27:21.665293 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:27:21.667286 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 13 08:27:21.667506 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:27:21.668316 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 13 08:27:21.668597 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 13 08:27:21.669685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 13 08:27:21.669923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 13 08:27:21.670907 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 13 08:27:21.671125 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 13 08:27:21.671946 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 13 08:27:21.674803 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 13 08:27:21.683898 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 13 08:27:21.685662 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 13 08:27:21.686775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 13 08:27:21.688644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:27:21.691325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 13 08:27:21.691525 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 13 08:27:21.704435 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 13 08:27:21.705583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 13 08:27:21.712518 ignition[990]: INFO : Ignition 2.20.0
Nov 13 08:27:21.712518 ignition[990]: INFO : Stage: umount
Nov 13 08:27:21.712518 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 13 08:27:21.712518 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 13 08:27:21.727713 ignition[990]: INFO : umount: umount passed
Nov 13 08:27:21.727713 ignition[990]: INFO : Ignition finished successfully
Nov 13 08:27:21.725035 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 13 08:27:21.725206 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 13 08:27:21.727234 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 13 08:27:21.727468 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 13 08:27:21.731097 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 13 08:27:21.731189 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 13 08:27:21.732864 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 13 08:27:21.732925 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 13 08:27:21.735570 systemd[1]: Stopped target network.target - Network.
Nov 13 08:27:21.736024 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 13 08:27:21.736111 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 13 08:27:21.736774 systemd[1]: Stopped target paths.target - Path Units.
Nov 13 08:27:21.737354 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 13 08:27:21.741817 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:27:21.743472 systemd[1]: Stopped target slices.target - Slice Units.
Nov 13 08:27:21.744537 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 13 08:27:21.745951 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 13 08:27:21.746020 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 13 08:27:21.747754 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 13 08:27:21.747828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 13 08:27:21.749633 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 13 08:27:21.749714 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 13 08:27:21.751063 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 13 08:27:21.751138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 13 08:27:21.752453 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 13 08:27:21.753472 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 13 08:27:21.756563 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 13 08:27:21.757406 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 13 08:27:21.757586 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 13 08:27:21.757684 systemd-networkd[754]: eth0: DHCPv6 lease lost
Nov 13 08:27:21.760081 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 13 08:27:21.760214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 13 08:27:21.760701 systemd-networkd[754]: eth1: DHCPv6 lease lost
Nov 13 08:27:21.765051 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 13 08:27:21.765373 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 13 08:27:21.767570 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 13 08:27:21.767743 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 13 08:27:21.772832 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 13 08:27:21.772941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:27:21.779826 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 13 08:27:21.780544 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 13 08:27:21.780626 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 13 08:27:21.783757 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 13 08:27:21.783850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:27:21.784638 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 13 08:27:21.784716 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:27:21.786812 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 13 08:27:21.786867 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:27:21.788336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:27:21.803933 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 13 08:27:21.804167 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:27:21.805425 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 13 08:27:21.806787 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 13 08:27:21.809124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 13 08:27:21.809201 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:27:21.810766 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 13 08:27:21.810819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:27:21.812124 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 13 08:27:21.812209 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 13 08:27:21.814192 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 13 08:27:21.814272 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 13 08:27:21.815734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 13 08:27:21.815825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 13 08:27:21.823821 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 13 08:27:21.824738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 13 08:27:21.824860 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:27:21.827075 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 13 08:27:21.827159 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:27:21.831298 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 13 08:27:21.831383 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:27:21.832157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:21.832232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:21.836661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 13 08:27:21.837015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 13 08:27:21.838058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 13 08:27:21.846935 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 13 08:27:21.857445 systemd[1]: Switching root.
Nov 13 08:27:22.055826 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 13 08:27:22.055926 systemd-journald[184]: Journal stopped
Nov 13 08:27:23.361770 kernel: SELinux: policy capability network_peer_controls=1
Nov 13 08:27:23.361873 kernel: SELinux: policy capability open_perms=1
Nov 13 08:27:23.361895 kernel: SELinux: policy capability extended_socket_class=1
Nov 13 08:27:23.361919 kernel: SELinux: policy capability always_check_network=0
Nov 13 08:27:23.361937 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 13 08:27:23.361953 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 13 08:27:23.361971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 13 08:27:23.361989 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 13 08:27:23.362012 kernel: audit: type=1403 audit(1731486442.207:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 13 08:27:23.362042 systemd[1]: Successfully loaded SELinux policy in 43.080ms.
Nov 13 08:27:23.362071 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.792ms.
Nov 13 08:27:23.362091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 13 08:27:23.362111 systemd[1]: Detected virtualization kvm.
Nov 13 08:27:23.362130 systemd[1]: Detected architecture x86-64.
Nov 13 08:27:23.362149 systemd[1]: Detected first boot.
Nov 13 08:27:23.362170 systemd[1]: Hostname set to <ci-4152.0.0-f-d2466dff01>.
Nov 13 08:27:23.362189 systemd[1]: Initializing machine ID from VM UUID.
Nov 13 08:27:23.362213 zram_generator::config[1035]: No configuration found.
Nov 13 08:27:23.362233 systemd[1]: Populated /etc with preset unit settings.
Nov 13 08:27:23.362252 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 13 08:27:23.362272 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 13 08:27:23.362292 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 13 08:27:23.362312 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 13 08:27:23.362331 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 13 08:27:23.362356 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 13 08:27:23.362377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 13 08:27:23.362396 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 13 08:27:23.362416 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 13 08:27:23.362436 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 13 08:27:23.362456 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 13 08:27:23.362514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 13 08:27:23.362535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 13 08:27:23.362555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 13 08:27:23.362574 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 13 08:27:23.362598 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 13 08:27:23.362617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 13 08:27:23.362635 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 13 08:27:23.362654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 13 08:27:23.362674 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 13 08:27:23.362692 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 13 08:27:23.362716 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 13 08:27:23.362739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 13 08:27:23.362758 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 13 08:27:23.362778 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 13 08:27:23.362797 systemd[1]: Reached target slices.target - Slice Units.
Nov 13 08:27:23.362817 systemd[1]: Reached target swap.target - Swaps.
Nov 13 08:27:23.362836 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 13 08:27:23.362855 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 13 08:27:23.362874 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 13 08:27:23.362894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 13 08:27:23.362917 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 13 08:27:23.362937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 13 08:27:23.362955 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 13 08:27:23.362980 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 13 08:27:23.363000 systemd[1]: Mounting media.mount - External Media Directory...
Nov 13 08:27:23.363019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:23.363038 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 13 08:27:23.363058 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 13 08:27:23.363081 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 13 08:27:23.363103 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 13 08:27:23.363123 systemd[1]: Reached target machines.target - Containers.
Nov 13 08:27:23.363141 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 13 08:27:23.363160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:23.363180 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 13 08:27:23.363199 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 13 08:27:23.363217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:23.363232 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:27:23.363254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:23.363274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 13 08:27:23.363293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:23.363315 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 13 08:27:23.363334 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 13 08:27:23.363354 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 13 08:27:23.363372 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 13 08:27:23.363391 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 13 08:27:23.363415 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 13 08:27:23.363434 kernel: loop: module loaded
Nov 13 08:27:23.363454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 13 08:27:23.363473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 13 08:27:23.363511 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 13 08:27:23.363531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 13 08:27:23.363552 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 13 08:27:23.363571 kernel: ACPI: bus type drm_connector registered
Nov 13 08:27:23.363590 systemd[1]: Stopped verity-setup.service.
Nov 13 08:27:23.363613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:23.363633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 13 08:27:23.363654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 13 08:27:23.365633 kernel: fuse: init (API version 7.39)
Nov 13 08:27:23.365681 systemd[1]: Mounted media.mount - External Media Directory.
Nov 13 08:27:23.365706 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 13 08:27:23.365733 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 13 08:27:23.365752 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 13 08:27:23.365772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 13 08:27:23.365794 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 13 08:27:23.365817 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 13 08:27:23.365836 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:23.365855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:23.365875 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:27:23.365893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:27:23.365912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:23.365931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:23.365949 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 13 08:27:23.365968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 13 08:27:23.365990 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:23.366009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:23.366028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 13 08:27:23.366040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 13 08:27:23.366052 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 13 08:27:23.366064 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 13 08:27:23.366076 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 13 08:27:23.366147 systemd-journald[1108]: Collecting audit messages is disabled.
Nov 13 08:27:23.366182 systemd-journald[1108]: Journal started
Nov 13 08:27:23.366207 systemd-journald[1108]: Runtime Journal (/run/log/journal/2340ca04f28649fb8cfb49e0f799f78f) is 4.9M, max 39.3M, 34.4M free.
Nov 13 08:27:22.918665 systemd[1]: Queued start job for default target multi-user.target.
Nov 13 08:27:22.941272 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 13 08:27:22.941984 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 13 08:27:23.378999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 13 08:27:23.385642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 13 08:27:23.385731 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 13 08:27:23.390522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 13 08:27:23.398646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 13 08:27:23.408518 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 13 08:27:23.414876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:23.424527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 13 08:27:23.429524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:23.447329 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 13 08:27:23.450677 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:23.463131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 13 08:27:23.467528 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 13 08:27:23.477543 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 13 08:27:23.488947 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 13 08:27:23.485580 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 13 08:27:23.486730 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 13 08:27:23.487829 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 13 08:27:23.489345 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 13 08:27:23.490749 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 13 08:27:23.527581 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 13 08:27:23.547221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 13 08:27:23.556798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 13 08:27:23.569777 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 13 08:27:23.580844 kernel: loop0: detected capacity change from 0 to 205544
Nov 13 08:27:23.581864 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 13 08:27:23.586176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 13 08:27:23.628818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 13 08:27:23.643180 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 13 08:27:23.644837 systemd-journald[1108]: Time spent on flushing to /var/log/journal/2340ca04f28649fb8cfb49e0f799f78f is 44.427ms for 1000 entries.
Nov 13 08:27:23.644837 systemd-journald[1108]: System Journal (/var/log/journal/2340ca04f28649fb8cfb49e0f799f78f) is 8.0M, max 195.6M, 187.6M free.
Nov 13 08:27:23.728369 systemd-journald[1108]: Received client request to flush runtime journal.
Nov 13 08:27:23.728438 kernel: loop1: detected capacity change from 0 to 8
Nov 13 08:27:23.728466 kernel: loop2: detected capacity change from 0 to 138184
Nov 13 08:27:23.672053 systemd-tmpfiles[1138]: ACLs are not supported, ignoring.
Nov 13 08:27:23.672068 systemd-tmpfiles[1138]: ACLs are not supported, ignoring.
Nov 13 08:27:23.688747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 13 08:27:23.690880 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 13 08:27:23.692767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 13 08:27:23.700947 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 13 08:27:23.734894 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 13 08:27:23.769016 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 13 08:27:23.774570 kernel: loop3: detected capacity change from 0 to 140992
Nov 13 08:27:23.775742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 13 08:27:23.835170 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Nov 13 08:27:23.835192 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Nov 13 08:27:23.849302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 13 08:27:23.859519 kernel: loop4: detected capacity change from 0 to 205544
Nov 13 08:27:23.888471 kernel: loop5: detected capacity change from 0 to 8
Nov 13 08:27:23.891451 kernel: loop6: detected capacity change from 0 to 138184
Nov 13 08:27:23.912552 kernel: loop7: detected capacity change from 0 to 140992
Nov 13 08:27:23.933125 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 13 08:27:23.933764 (sd-merge)[1184]: Merged extensions into '/usr'.
Nov 13 08:27:23.942235 systemd[1]: Reloading requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 13 08:27:23.942257 systemd[1]: Reloading...
Nov 13 08:27:24.149039 zram_generator::config[1210]: No configuration found.
Nov 13 08:27:24.213404 ldconfig[1133]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 13 08:27:24.383241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:27:24.458049 systemd[1]: Reloading finished in 514 ms.
Nov 13 08:27:24.486075 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 13 08:27:24.489898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 13 08:27:24.504897 systemd[1]: Starting ensure-sysext.service...
Nov 13 08:27:24.511576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 13 08:27:24.526674 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Nov 13 08:27:24.526710 systemd[1]: Reloading...
Nov 13 08:27:24.575221 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 13 08:27:24.576695 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 13 08:27:24.578912 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 13 08:27:24.579168 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Nov 13 08:27:24.579224 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Nov 13 08:27:24.592311 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:27:24.592332 systemd-tmpfiles[1254]: Skipping /boot
Nov 13 08:27:24.621841 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Nov 13 08:27:24.621856 systemd-tmpfiles[1254]: Skipping /boot
Nov 13 08:27:24.670669 zram_generator::config[1279]: No configuration found.
Nov 13 08:27:24.823225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:27:24.875212 systemd[1]: Reloading finished in 348 ms.
Nov 13 08:27:24.894520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 13 08:27:24.900414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 13 08:27:24.918772 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 13 08:27:24.922837 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 13 08:27:24.928502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 13 08:27:24.939889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 13 08:27:24.946831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 13 08:27:24.951810 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 13 08:27:24.963075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:24.963376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:24.974960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:24.978928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:24.991064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:24.992766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:24.992986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.006939 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 13 08:27:25.009448 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 13 08:27:25.021010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:25.024003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:25.028271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.029741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
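The (sd-merge) messages a little further up show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean images onto /usr, followed by the two daemon reloads. A sketch of inspecting that merged state on the booted host, assuming the documented sysext layout in which each merged image publishes an extension-release file (the inspection code is illustrative; only the extension names come from the log):

```python
# Sketch: inspect merged system extensions. Each merged sysext image ships
# usr/lib/extension-release.d/extension-release.<name>; after the merge these
# are visible under /usr. Layout per the sysext convention, not read from
# this particular host.
import pathlib
import subprocess

release_dir = pathlib.Path("/usr/lib/extension-release.d")
if release_dir.is_dir():
    for release in sorted(release_dir.glob("extension-release.*")):
        fields = dict(
            line.split("=", 1)
            for line in release.read_text().splitlines()
            if "=" in line and not line.startswith("#")
        )
        # ID/SYSEXT_LEVEL gate whether the image may be merged onto this OS.
        print(release.name, "->", fields.get("ID"), fields.get("SYSEXT_LEVEL"))

# The CLI view of the same overlay state:
subprocess.run(["systemd-sysext", "status"], check=False)
```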
Nov 13 08:27:25.030904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:25.040371 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 13 08:27:25.042632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.060708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 13 08:27:25.064197 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:25.064573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:25.067676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:25.068581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:25.078975 systemd[1]: Finished ensure-sysext.service.
Nov 13 08:27:25.080468 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 13 08:27:25.091872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.092045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:25.097317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:25.101742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 13 08:27:25.102763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:25.102837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:25.108800 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 13 08:27:25.110773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 08:27:25.110820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.125319 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 13 08:27:25.138990 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Nov 13 08:27:25.148121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:25.148460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:25.151179 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 13 08:27:25.153165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:25.160978 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 13 08:27:25.161576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 13 08:27:25.180840 augenrules[1373]: No rules
Nov 13 08:27:25.184262 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 13 08:27:25.185148 systemd[1]: Finished audit-rules.service - Load Audit Rules.
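The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first declaration it parses and ignores the rest. A small sketch that finds such collisions, assuming the standard tmpfiles.d search directories and the whitespace-separated entry format (the scanner is illustrative, not part of Flatcar):

```python
# Sketch: report tmpfiles.d entries that declare the same path more than once,
# the condition behind the "Duplicate line for path ... ignoring" warnings.
import pathlib
from collections import defaultdict

SEARCH_DIRS = ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")

declared = defaultdict(list)
for d in SEARCH_DIRS:
    base = pathlib.Path(d)
    if not base.is_dir():
        continue
    for frag in sorted(base.glob("*.conf")):
        for lineno, line in enumerate(frag.read_text().splitlines(), start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            if len(parts) >= 2:
                declared[parts[1]].append(f"{frag}:{lineno}")  # field 2 is the path

for path, sites in declared.items():
    if len(sites) > 1:
        print(f"duplicate {path}: {', '.join(sites)}")
```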
Nov 13 08:27:25.191910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 13 08:27:25.200872 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 13 08:27:25.268232 systemd-resolved[1329]: Positive Trust Anchors:
Nov 13 08:27:25.268657 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 13 08:27:25.268753 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 13 08:27:25.276705 systemd-resolved[1329]: Using system hostname 'ci-4152.0.0-f-d2466dff01'.
Nov 13 08:27:25.279194 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 13 08:27:25.280295 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 13 08:27:25.314691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 13 08:27:25.316718 systemd[1]: Reached target time-set.target - System Time Set.
Nov 13 08:27:25.341349 systemd-networkd[1383]: lo: Link UP
Nov 13 08:27:25.341361 systemd-networkd[1383]: lo: Gained carrier
Nov 13 08:27:25.342146 systemd-networkd[1383]: Enumeration completed
Nov 13 08:27:25.342347 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 13 08:27:25.343616 systemd[1]: Reached target network.target - Network.
Nov 13 08:27:25.350846 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 13 08:27:25.368515 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1387)
Nov 13 08:27:25.379515 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1387)
Nov 13 08:27:25.379813 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 13 08:27:25.396751 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 13 08:27:25.397322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.397456 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 13 08:27:25.399846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 13 08:27:25.411248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 13 08:27:25.415623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 13 08:27:25.432193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 13 08:27:25.432262 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 13 08:27:25.432279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 13 08:27:25.434797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 13 08:27:25.435044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 13 08:27:25.438065 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 13 08:27:25.440586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 13 08:27:25.448631 systemd-networkd[1383]: eth1: Configuring with /run/systemd/network/10-22:ac:92:49:fc:ff.network.
Nov 13 08:27:25.449049 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 13 08:27:25.451961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 13 08:27:25.452396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 13 08:27:25.454647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 13 08:27:25.456210 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 13 08:27:25.455848 systemd-networkd[1383]: eth1: Link UP
Nov 13 08:27:25.455855 systemd-networkd[1383]: eth1: Gained carrier
Nov 13 08:27:25.461302 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Nov 13 08:27:25.463550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1391)
Nov 13 08:27:25.478951 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 13 08:27:25.538516 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 13 08:27:25.544045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 13 08:27:25.549985 systemd-networkd[1383]: eth0: Configuring with /run/systemd/network/10-76:5b:df:4f:4f:40.network.
Nov 13 08:27:25.552062 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Nov 13 08:27:25.552754 systemd-networkd[1383]: eth0: Link UP
Nov 13 08:27:25.552768 systemd-networkd[1383]: eth0: Gained carrier
Nov 13 08:27:25.554608 kernel: ACPI: button: Power Button [PWRF]
Nov 13 08:27:25.554099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 13 08:27:25.556611 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Nov 13 08:27:25.558619 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
Nov 13 08:27:25.564663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 13 08:27:25.596927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 13 08:27:25.598560 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 13 08:27:25.667513 kernel: mousedev: PS/2 mouse device common for all mice
Nov 13 08:27:25.670909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
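Each NIC above is matched to a generated unit named after its MAC address, e.g. /run/systemd/network/10-22:ac:92:49:fc:ff.network for eth1. A minimal sketch of composing such a unit, assuming the cmdline generator emits a simple match-on-MAC, DHCP-enabled profile (the unit path mirrors the log; MACAddress= and DHCP= are standard systemd.network options, but these contents are an assumption, not a dump of the real file):

```python
# Sketch: compose a per-MAC systemd-networkd unit like the generated
# 10-22:ac:92:49:fc:ff.network referenced above. Contents are assumed;
# only the naming scheme is taken from the log. Writing to /run needs root.
import pathlib

def network_unit(mac: str) -> str:
    return (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )

mac = "22:ac:92:49:fc:ff"
unit = pathlib.Path(f"/run/systemd/network/10-{mac}.network")
unit.write_text(network_unit(mac))
print(f"wrote {unit}")
```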
Nov 13 08:27:25.689143 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 13 08:27:25.689228 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 13 08:27:25.693975 kernel: Console: switching to colour dummy device 80x25
Nov 13 08:27:25.695534 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 13 08:27:25.695605 kernel: [drm] features: -context_init
Nov 13 08:27:25.696672 kernel: [drm] number of scanouts: 1
Nov 13 08:27:25.697513 kernel: [drm] number of cap sets: 0
Nov 13 08:27:25.701526 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 13 08:27:25.705515 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 13 08:27:25.710758 kernel: Console: switching to colour frame buffer device 128x48
Nov 13 08:27:25.715559 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 13 08:27:25.739970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 13 08:27:25.740615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:25.768099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 13 08:27:25.912620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 13 08:27:25.921525 kernel: EDAC MC: Ver: 3.0.0
Nov 13 08:27:25.950744 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 13 08:27:25.955838 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 13 08:27:25.980556 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:27:26.006034 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 13 08:27:26.007572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 13 08:27:26.007741 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 13 08:27:26.007992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 13 08:27:26.008165 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 13 08:27:26.008586 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 13 08:27:26.008909 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 13 08:27:26.009041 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 13 08:27:26.009161 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 13 08:27:26.009208 systemd[1]: Reached target paths.target - Path Units.
Nov 13 08:27:26.009292 systemd[1]: Reached target timers.target - Timer Units.
Nov 13 08:27:26.011783 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 13 08:27:26.014284 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 13 08:27:26.022169 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 13 08:27:26.033783 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 13 08:27:26.036150 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 13 08:27:26.036758 systemd[1]: Reached target sockets.target - Socket Units.
Nov 13 08:27:26.037175 systemd[1]: Reached target basic.target - Basic System.
Nov 13 08:27:26.039994 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:27:26.040030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 13 08:27:26.041208 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 13 08:27:26.041876 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 13 08:27:26.060772 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 13 08:27:26.065109 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 13 08:27:26.075726 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 13 08:27:26.080802 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 13 08:27:26.082338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 13 08:27:26.092706 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 13 08:27:26.100703 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 13 08:27:26.105010 jq[1447]: false
Nov 13 08:27:26.115837 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 13 08:27:26.126372 extend-filesystems[1450]: Found loop4
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found loop5
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found loop6
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found loop7
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda1
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda2
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda3
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found usr
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda4
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda6
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda7
Nov 13 08:27:26.133889 extend-filesystems[1450]: Found vda9
Nov 13 08:27:26.133889 extend-filesystems[1450]: Checking size of /dev/vda9
Nov 13 08:27:26.127850 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 13 08:27:26.139342 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 13 08:27:26.147386 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 13 08:27:26.148807 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 13 08:27:26.168463 coreos-metadata[1445]: Nov 13 08:27:26.168 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 13 08:27:26.170709 systemd[1]: Starting update-engine.service - Update Engine...
Nov 13 08:27:26.174698 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 13 08:27:26.178724 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 13 08:27:26.180914 coreos-metadata[1445]: Nov 13 08:27:26.180 INFO Fetch successful
Nov 13 08:27:26.191922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 13 08:27:26.192173 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 13 08:27:26.194431 dbus-daemon[1446]: [system] SELinux support is enabled Nov 13 08:27:26.202431 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 13 08:27:26.217177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 13 08:27:26.217403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 13 08:27:26.226330 extend-filesystems[1450]: Resized partition /dev/vda9 Nov 13 08:27:26.248327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 13 08:27:26.256427 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 13 08:27:26.256461 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 13 08:27:26.258572 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 13 08:27:26.258709 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 13 08:27:26.258740 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 13 08:27:26.260025 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Nov 13 08:27:26.278119 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 13 08:27:26.278220 update_engine[1460]: I20241113 08:27:26.278012 1460 main.cc:92] Flatcar Update Engine starting Nov 13 08:27:26.281292 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 13 08:27:26.281679 systemd[1]: motdgen.service: Deactivated successfully. Nov 13 08:27:26.281976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 13 08:27:26.297605 jq[1463]: true Nov 13 08:27:26.308397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1391) Nov 13 08:27:26.304979 systemd[1]: Started update-engine.service - Update Engine. Nov 13 08:27:26.308724 update_engine[1460]: I20241113 08:27:26.305389 1460 update_check_scheduler.cc:74] Next update check in 9m31s Nov 13 08:27:26.320190 tar[1468]: linux-amd64/helm Nov 13 08:27:26.323979 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 13 08:27:26.350892 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 13 08:27:26.352363 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 13 08:27:26.380045 jq[1488]: true Nov 13 08:27:26.418168 systemd-logind[1457]: New seat seat0. Nov 13 08:27:26.429551 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Nov 13 08:27:26.429575 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 13 08:27:26.429864 systemd[1]: Started systemd-logind.service - User Login Management. 
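extend-filesystems has resized the /dev/vda9 partition and is now growing the mounted ext4 filesystem on it from 553472 to 15121403 4k blocks (the completion shows up just below). A rough manual equivalent, assuming cloud-utils' growpart is available; resize2fs itself is what the service invokes, per the log:

    # Grow partition 9 of /dev/vda, then online-resize the mounted ext4 filesystem
    growpart /dev/vda 9      # assumption: growpart from cloud-utils is installed
    resize2fs /dev/vda9      # ext4 supports online grow while mounted read-write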
Nov 13 08:27:26.509024 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 13 08:27:26.557967 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 13 08:27:26.571815 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 13 08:27:26.571815 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 13 08:27:26.571815 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 13 08:27:26.575081 extend-filesystems[1450]: Resized filesystem in /dev/vda9 Nov 13 08:27:26.575081 extend-filesystems[1450]: Found vdb Nov 13 08:27:26.576309 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 13 08:27:26.576637 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 13 08:27:26.597852 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Nov 13 08:27:26.599585 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 13 08:27:26.611961 systemd[1]: Starting sshkeys.service... Nov 13 08:27:26.628670 systemd-networkd[1383]: eth0: Gained IPv6LL Nov 13 08:27:26.629738 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Nov 13 08:27:26.634950 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 13 08:27:26.639394 systemd[1]: Reached target network-online.target - Network is Online. Nov 13 08:27:26.649962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:26.660903 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 13 08:27:26.676725 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 13 08:27:26.692075 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 13 08:27:26.789789 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 13 08:27:26.823673 coreos-metadata[1525]: Nov 13 08:27:26.823 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 13 08:27:26.841508 coreos-metadata[1525]: Nov 13 08:27:26.838 INFO Fetch successful Nov 13 08:27:26.872372 unknown[1525]: wrote ssh authorized keys file for user: core Nov 13 08:27:26.941753 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Nov 13 08:27:26.945799 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 13 08:27:26.949830 systemd[1]: Finished sshkeys.service. Nov 13 08:27:26.993524 containerd[1482]: time="2024-11-13T08:27:26.991356759Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 13 08:27:27.094052 containerd[1482]: time="2024-11-13T08:27:27.093966704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.101811 containerd[1482]: time="2024-11-13T08:27:27.101730115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.102632127Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.102696032Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.102955921Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.102988026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103074357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103095031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103362403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103386596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103408225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103423867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103574507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.104508 containerd[1482]: time="2024-11-13T08:27:27.103898468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 13 08:27:27.106655 containerd[1482]: time="2024-11-13T08:27:27.104075214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 13 08:27:27.106655 containerd[1482]: time="2024-11-13T08:27:27.104099205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 13 08:27:27.106655 containerd[1482]: time="2024-11-13T08:27:27.104213835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 13 08:27:27.106655 containerd[1482]: time="2024-11-13T08:27:27.104280197Z" level=info msg="metadata content store policy set" policy=shared Nov 13 08:27:27.119227 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.119816331Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.119925362Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.119954494Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.119979485Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120003172Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120246197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120651558Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120833658Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120863781Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120889104Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120910787Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120932058Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120951414Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.121846 containerd[1482]: time="2024-11-13T08:27:27.120975702Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.120997643Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121022818Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121043158Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121062941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121093107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121113987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121132151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121150899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121169779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121250890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121278939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121301599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121324193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122364 containerd[1482]: time="2024-11-13T08:27:27.121350339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122843 containerd[1482]: time="2024-11-13T08:27:27.121369561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122843 containerd[1482]: time="2024-11-13T08:27:27.121389962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122843 containerd[1482]: time="2024-11-13T08:27:27.121408616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.122843 containerd[1482]: time="2024-11-13T08:27:27.121431183Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 13 08:27:27.122843 containerd[1482]: time="2024-11-13T08:27:27.121465795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124203088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124390560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124558202Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124617235Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124640322Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124661960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124698846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124720287Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124737004Z" level=info msg="NRI interface is disabled by configuration." Nov 13 08:27:27.125787 containerd[1482]: time="2024-11-13T08:27:27.124753305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 13 08:27:27.128718 containerd[1482]: time="2024-11-13T08:27:27.127788105Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 13 08:27:27.128718 containerd[1482]: time="2024-11-13T08:27:27.127906048Z" level=info msg="Connect containerd service" Nov 13 08:27:27.128718 containerd[1482]: time="2024-11-13T08:27:27.128029006Z" level=info msg="using 
legacy CRI server" Nov 13 08:27:27.128718 containerd[1482]: time="2024-11-13T08:27:27.128050430Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 13 08:27:27.132267 containerd[1482]: time="2024-11-13T08:27:27.131058124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 13 08:27:27.135210 containerd[1482]: time="2024-11-13T08:27:27.134135519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.136847876Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.136933762Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137316364Z" level=info msg="Start subscribing containerd event" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137544336Z" level=info msg="Start recovering state" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137653600Z" level=info msg="Start event monitor" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137677320Z" level=info msg="Start snapshots syncer" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137690891Z" level=info msg="Start cni network conf syncer for default" Nov 13 08:27:27.138590 containerd[1482]: time="2024-11-13T08:27:27.137710047Z" level=info msg="Start streaming server" Nov 13 08:27:27.138113 systemd[1]: Started containerd.service - containerd container runtime. Nov 13 08:27:27.141972 containerd[1482]: time="2024-11-13T08:27:27.140981838Z" level=info msg="containerd successfully booted in 0.155624s" Nov 13 08:27:27.212241 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 13 08:27:27.226289 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 13 08:27:27.236069 systemd[1]: Started sshd@0-64.23.149.40:22-139.178.89.65:50034.service - OpenSSH per-connection server daemon (139.178.89.65:50034). Nov 13 08:27:27.275468 systemd[1]: issuegen.service: Deactivated successfully. Nov 13 08:27:27.277615 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 13 08:27:27.288200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 13 08:27:27.341927 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 13 08:27:27.355189 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 13 08:27:27.358425 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 13 08:27:27.359881 systemd[1]: Reached target getty.target - Login Prompts. Nov 13 08:27:27.395767 sshd[1553]: Accepted publickey for core from 139.178.89.65 port 50034 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:27.395686 systemd-networkd[1383]: eth1: Gained IPv6LL Nov 13 08:27:27.396076 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Nov 13 08:27:27.398995 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:27.419951 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
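The CRI plugin configuration dumped above maps onto a containerd 1.7 config.toml roughly as follows; the values (overlayfs snapshotter, runc v2 runtime, systemd cgroups, pause:3.8 sandbox image) are taken from the log, while the file layout is the stock version-2 format rather than a copy of this host's actual file:

    # Sketch of the equivalent /etc/containerd/config.toml fragment
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true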
Nov 13 08:27:27.429849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 13 08:27:27.441551 systemd-logind[1457]: New session 1 of user core. Nov 13 08:27:27.477069 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 13 08:27:27.490924 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 13 08:27:27.509005 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 13 08:27:27.677413 systemd[1565]: Queued start job for default target default.target. Nov 13 08:27:27.683497 systemd[1565]: Created slice app.slice - User Application Slice. Nov 13 08:27:27.684554 systemd[1565]: Reached target paths.target - Paths. Nov 13 08:27:27.684574 systemd[1565]: Reached target timers.target - Timers. Nov 13 08:27:27.687634 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 13 08:27:27.718870 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 13 08:27:27.720628 systemd[1565]: Reached target sockets.target - Sockets. Nov 13 08:27:27.720658 systemd[1565]: Reached target basic.target - Basic System. Nov 13 08:27:27.720717 systemd[1565]: Reached target default.target - Main User Target. Nov 13 08:27:27.720750 systemd[1565]: Startup finished in 200ms. Nov 13 08:27:27.723005 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 13 08:27:27.732718 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 13 08:27:27.774835 tar[1468]: linux-amd64/LICENSE Nov 13 08:27:27.774835 tar[1468]: linux-amd64/README.md Nov 13 08:27:27.792768 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 13 08:27:27.821935 systemd[1]: Started sshd@1-64.23.149.40:22-139.178.89.65:50038.service - OpenSSH per-connection server daemon (139.178.89.65:50038). Nov 13 08:27:27.916753 sshd[1579]: Accepted publickey for core from 139.178.89.65 port 50038 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:27.917368 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:27.926770 systemd-logind[1457]: New session 2 of user core. Nov 13 08:27:27.930844 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 13 08:27:28.000326 sshd[1581]: Connection closed by 139.178.89.65 port 50038 Nov 13 08:27:28.004926 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:28.013559 systemd[1]: sshd@1-64.23.149.40:22-139.178.89.65:50038.service: Deactivated successfully. Nov 13 08:27:28.019413 systemd[1]: session-2.scope: Deactivated successfully. Nov 13 08:27:28.021051 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Nov 13 08:27:28.031188 systemd[1]: Started sshd@2-64.23.149.40:22-139.178.89.65:50046.service - OpenSSH per-connection server daemon (139.178.89.65:50046). Nov 13 08:27:28.037614 systemd-logind[1457]: Removed session 2. Nov 13 08:27:28.086896 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 50046 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:28.088836 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:28.095823 systemd-logind[1457]: New session 3 of user core. Nov 13 08:27:28.102875 systemd[1]: Started session-3.scope - Session 3 of User core. 
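prepare-helm has finished unpacking the linux-amd64 helm tarball; taking the unit description literally about its destination, the binary should now answer from /opt/bin:

    # Smoke-test the unpacked binary (path taken from the unit description)
    /opt/bin/helm version --short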
Nov 13 08:27:28.177526 sshd[1588]: Connection closed by 139.178.89.65 port 50046 Nov 13 08:27:28.177650 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:28.186890 systemd[1]: sshd@2-64.23.149.40:22-139.178.89.65:50046.service: Deactivated successfully. Nov 13 08:27:28.189752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:28.193980 systemd[1]: session-3.scope: Deactivated successfully. Nov 13 08:27:28.195898 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Nov 13 08:27:28.203099 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:28.203344 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 13 08:27:28.204874 systemd[1]: Startup finished in 2.060s (kernel) + 6.432s (initrd) + 6.039s (userspace) = 14.532s. Nov 13 08:27:28.206843 systemd-logind[1457]: Removed session 3. Nov 13 08:27:28.971527 kubelet[1595]: E1113 08:27:28.971447 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:28.973771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:28.973957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:28.974289 systemd[1]: kubelet.service: Consumed 1.439s CPU time. Nov 13 08:27:38.201891 systemd[1]: Started sshd@3-64.23.149.40:22-139.178.89.65:56972.service - OpenSSH per-connection server daemon (139.178.89.65:56972). Nov 13 08:27:38.254894 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 56972 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:38.256417 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:38.261886 systemd-logind[1457]: New session 4 of user core. Nov 13 08:27:38.269833 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 13 08:27:38.331645 sshd[1611]: Connection closed by 139.178.89.65 port 56972 Nov 13 08:27:38.332267 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:38.344637 systemd[1]: sshd@3-64.23.149.40:22-139.178.89.65:56972.service: Deactivated successfully. Nov 13 08:27:38.346829 systemd[1]: session-4.scope: Deactivated successfully. Nov 13 08:27:38.347646 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Nov 13 08:27:38.352877 systemd[1]: Started sshd@4-64.23.149.40:22-139.178.89.65:56980.service - OpenSSH per-connection server daemon (139.178.89.65:56980). Nov 13 08:27:38.355107 systemd-logind[1457]: Removed session 4. Nov 13 08:27:38.411673 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 56980 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:38.413529 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:38.420193 systemd-logind[1457]: New session 5 of user core. Nov 13 08:27:38.425767 systemd[1]: Started session-5.scope - Session 5 of User core. 
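The kubelet crash above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during init or join, so the unit will keep restarting and failing until that runs (the same error recurs below). Purely for illustration, a minimal hand-written KubeletConfiguration of the kind that file contains:

    # /var/lib/kubelet/config.yaml -- illustrative sketch; normally generated
    # by kubeadm rather than written by hand
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false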
Nov 13 08:27:38.482316 sshd[1619]: Connection closed by 139.178.89.65 port 56980 Nov 13 08:27:38.482974 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:38.491442 systemd[1]: sshd@4-64.23.149.40:22-139.178.89.65:56980.service: Deactivated successfully. Nov 13 08:27:38.493524 systemd[1]: session-5.scope: Deactivated successfully. Nov 13 08:27:38.495655 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Nov 13 08:27:38.501901 systemd[1]: Started sshd@5-64.23.149.40:22-139.178.89.65:56996.service - OpenSSH per-connection server daemon (139.178.89.65:56996). Nov 13 08:27:38.503601 systemd-logind[1457]: Removed session 5. Nov 13 08:27:38.558034 sshd[1624]: Accepted publickey for core from 139.178.89.65 port 56996 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:38.559689 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:38.565783 systemd-logind[1457]: New session 6 of user core. Nov 13 08:27:38.572862 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 13 08:27:38.637725 sshd[1626]: Connection closed by 139.178.89.65 port 56996 Nov 13 08:27:38.638564 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:38.652594 systemd[1]: sshd@5-64.23.149.40:22-139.178.89.65:56996.service: Deactivated successfully. Nov 13 08:27:38.654827 systemd[1]: session-6.scope: Deactivated successfully. Nov 13 08:27:38.656886 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Nov 13 08:27:38.673019 systemd[1]: Started sshd@6-64.23.149.40:22-139.178.89.65:57004.service - OpenSSH per-connection server daemon (139.178.89.65:57004). Nov 13 08:27:38.675241 systemd-logind[1457]: Removed session 6. Nov 13 08:27:38.726338 sshd[1631]: Accepted publickey for core from 139.178.89.65 port 57004 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:38.728768 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:38.735776 systemd-logind[1457]: New session 7 of user core. Nov 13 08:27:38.741754 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 13 08:27:38.814115 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 13 08:27:38.814431 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:38.827746 sudo[1634]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:38.832725 sshd[1633]: Connection closed by 139.178.89.65 port 57004 Nov 13 08:27:38.831732 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:38.846664 systemd[1]: sshd@6-64.23.149.40:22-139.178.89.65:57004.service: Deactivated successfully. Nov 13 08:27:38.848831 systemd[1]: session-7.scope: Deactivated successfully. Nov 13 08:27:38.851769 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Nov 13 08:27:38.856878 systemd[1]: Started sshd@7-64.23.149.40:22-139.178.89.65:57006.service - OpenSSH per-connection server daemon (139.178.89.65:57006). Nov 13 08:27:38.858244 systemd-logind[1457]: Removed session 7. 
Nov 13 08:27:38.922269 sshd[1639]: Accepted publickey for core from 139.178.89.65 port 57006 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:38.924319 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:38.931401 systemd-logind[1457]: New session 8 of user core. Nov 13 08:27:38.936764 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 13 08:27:38.997117 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 13 08:27:38.997463 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:38.998437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 13 08:27:39.002977 sudo[1643]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:39.005890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:39.010959 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 13 08:27:39.011750 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:39.041725 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 13 08:27:39.087345 augenrules[1668]: No rules Nov 13 08:27:39.089816 systemd[1]: audit-rules.service: Deactivated successfully. Nov 13 08:27:39.091298 sudo[1642]: pam_unix(sudo:session): session closed for user root Nov 13 08:27:39.089998 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 13 08:27:39.095983 sshd[1641]: Connection closed by 139.178.89.65 port 57006 Nov 13 08:27:39.096590 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Nov 13 08:27:39.110299 systemd[1]: sshd@7-64.23.149.40:22-139.178.89.65:57006.service: Deactivated successfully. Nov 13 08:27:39.113930 systemd[1]: session-8.scope: Deactivated successfully. Nov 13 08:27:39.118639 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Nov 13 08:27:39.126326 systemd[1]: Started sshd@8-64.23.149.40:22-139.178.89.65:57010.service - OpenSSH per-connection server daemon (139.178.89.65:57010). Nov 13 08:27:39.134391 systemd-logind[1457]: Removed session 8. Nov 13 08:27:39.160717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:39.166737 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:39.181984 sshd[1676]: Accepted publickey for core from 139.178.89.65 port 57010 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:27:39.184026 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:27:39.195501 systemd-logind[1457]: New session 9 of user core. Nov 13 08:27:39.199793 systemd[1]: Started session-9.scope - Session 9 of User core. 
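The sudo above removed the two default audit rule files, so augenrules finds an empty /etc/audit/rules.d/ and loads nothing ("No rules"). A replacement rule set would be dropped in and loaded like this; the watch rule itself is only an example:

    # Example only: watch kubelet state for writes, then rebuild/load the rules
    echo '-w /var/lib/kubelet/ -p wa -k kubelet-state' > /etc/audit/rules.d/90-example.rules
    augenrules --load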
Nov 13 08:27:39.228778 kubelet[1683]: E1113 08:27:39.228701 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:39.232527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:39.232749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:39.263210 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 13 08:27:39.263535 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 13 08:27:39.740915 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 13 08:27:39.742388 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 13 08:27:40.219894 dockerd[1710]: time="2024-11-13T08:27:40.219800086Z" level=info msg="Starting up" Nov 13 08:27:40.389942 systemd[1]: var-lib-docker-metacopy\x2dcheck2687929599-merged.mount: Deactivated successfully. Nov 13 08:27:40.425192 dockerd[1710]: time="2024-11-13T08:27:40.424929154Z" level=info msg="Loading containers: start." Nov 13 08:27:40.637539 kernel: Initializing XFRM netlink socket Nov 13 08:27:40.670396 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection. Nov 13 08:27:42.124110 systemd-timesyncd[1360]: Contacted time server 173.73.96.68:123 (2.flatcar.pool.ntp.org). Nov 13 08:27:42.124588 systemd-timesyncd[1360]: Initial clock synchronization to Wed 2024-11-13 08:27:42.123830 UTC. Nov 13 08:27:42.124795 systemd-resolved[1329]: Clock change detected. Flushing caches. Nov 13 08:27:42.134824 systemd-networkd[1383]: docker0: Link UP Nov 13 08:27:42.179507 dockerd[1710]: time="2024-11-13T08:27:42.178770562Z" level=info msg="Loading containers: done." Nov 13 08:27:42.198092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck386752925-merged.mount: Deactivated successfully. Nov 13 08:27:42.201402 dockerd[1710]: time="2024-11-13T08:27:42.201327547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 13 08:27:42.201544 dockerd[1710]: time="2024-11-13T08:27:42.201459456Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Nov 13 08:27:42.201630 dockerd[1710]: time="2024-11-13T08:27:42.201584714Z" level=info msg="Daemon has completed initialization" Nov 13 08:27:42.250537 dockerd[1710]: time="2024-11-13T08:27:42.250418729Z" level=info msg="API listen on /run/docker.sock" Nov 13 08:27:42.250855 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 13 08:27:43.096915 containerd[1482]: time="2024-11-13T08:27:43.096334415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 13 08:27:43.764588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844766987.mount: Deactivated successfully. 
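The apparent jump from 08:27:40 to 08:27:42 in the timestamps above is timesyncd stepping the clock after its first successful NTP exchange with 2.flatcar.pool.ntp.org; resolved notices the step and flushes its caches. The servers consulted can be pinned in timesyncd's standard config file:

    # /etc/systemd/timesyncd.conf -- stock systemd-timesyncd format
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org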
Nov 13 08:27:45.526535 containerd[1482]: time="2024-11-13T08:27:45.526438643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:45.528398 containerd[1482]: time="2024-11-13T08:27:45.528307276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27975588" Nov 13 08:27:45.529823 containerd[1482]: time="2024-11-13T08:27:45.529158546Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:45.535375 containerd[1482]: time="2024-11-13T08:27:45.535273283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:45.537164 containerd[1482]: time="2024-11-13T08:27:45.536956658Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 2.44056628s" Nov 13 08:27:45.537164 containerd[1482]: time="2024-11-13T08:27:45.537012403Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 13 08:27:45.539752 containerd[1482]: time="2024-11-13T08:27:45.539693744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 13 08:27:47.510822 containerd[1482]: time="2024-11-13T08:27:47.510670737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.512600 containerd[1482]: time="2024-11-13T08:27:47.512532161Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24701922" Nov 13 08:27:47.514098 containerd[1482]: time="2024-11-13T08:27:47.513957475Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.519734 containerd[1482]: time="2024-11-13T08:27:47.518652811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:47.520560 containerd[1482]: time="2024-11-13T08:27:47.520039884Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 1.980237919s" Nov 13 08:27:47.520560 containerd[1482]: time="2024-11-13T08:27:47.520079034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 13 08:27:47.520970 
containerd[1482]: time="2024-11-13T08:27:47.520919767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 13 08:27:48.944673 containerd[1482]: time="2024-11-13T08:27:48.943422520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:48.945418 containerd[1482]: time="2024-11-13T08:27:48.945351623Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18657606" Nov 13 08:27:48.946778 containerd[1482]: time="2024-11-13T08:27:48.946728132Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:48.950181 containerd[1482]: time="2024-11-13T08:27:48.950143719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:48.951303 containerd[1482]: time="2024-11-13T08:27:48.951257420Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 1.430192276s" Nov 13 08:27:48.951400 containerd[1482]: time="2024-11-13T08:27:48.951306799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 13 08:27:48.952630 containerd[1482]: time="2024-11-13T08:27:48.952602359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 13 08:27:48.954517 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 13 08:27:50.171950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992816797.mount: Deactivated successfully. Nov 13 08:27:50.843858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 13 08:27:50.851054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:50.915155 containerd[1482]: time="2024-11-13T08:27:50.914319097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:51.011865 containerd[1482]: time="2024-11-13T08:27:51.011749512Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30226814" Nov 13 08:27:51.015937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
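The var-lib-containerd-tmpmounts-containerd\x2dmount….mount units appearing around each pull are transient mounts containerd sets up while unpacking image layers; the \x2d is systemd's escaping of "-" inside a path element. systemd-escape reproduces the mapping:

    # Map the mount path back to the unit name seen in the log
    systemd-escape -p --suffix=mount \
        /var/lib/containerd/tmpmounts/containerd-mount992816797
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount992816797.mount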
Nov 13 08:27:51.022949 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 13 08:27:51.029317 containerd[1482]: time="2024-11-13T08:27:51.029258328Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:51.032220 containerd[1482]: time="2024-11-13T08:27:51.032151431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:51.042472 containerd[1482]: time="2024-11-13T08:27:51.042346999Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 2.089598086s" Nov 13 08:27:51.042673 containerd[1482]: time="2024-11-13T08:27:51.042645348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 13 08:27:51.044515 containerd[1482]: time="2024-11-13T08:27:51.044456732Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 13 08:27:51.094499 kubelet[1981]: E1113 08:27:51.094423 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 13 08:27:51.098794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 13 08:27:51.099002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 13 08:27:51.560276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625047731.mount: Deactivated successfully. Nov 13 08:27:52.002949 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Nov 13 08:27:52.568853 containerd[1482]: time="2024-11-13T08:27:52.568786825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:52.570212 containerd[1482]: time="2024-11-13T08:27:52.570152658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 13 08:27:52.571632 containerd[1482]: time="2024-11-13T08:27:52.571569488Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:52.574637 containerd[1482]: time="2024-11-13T08:27:52.574591190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:52.576779 containerd[1482]: time="2024-11-13T08:27:52.576030606Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.531515647s" Nov 13 08:27:52.576779 containerd[1482]: time="2024-11-13T08:27:52.576079323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 13 08:27:52.577137 containerd[1482]: time="2024-11-13T08:27:52.577106076Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 13 08:27:53.038339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818360213.mount: Deactivated successfully. 
Nov 13 08:27:53.047195 containerd[1482]: time="2024-11-13T08:27:53.047101317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:53.048931 containerd[1482]: time="2024-11-13T08:27:53.048837221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 13 08:27:53.050689 containerd[1482]: time="2024-11-13T08:27:53.050593236Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:53.053449 containerd[1482]: time="2024-11-13T08:27:53.053385447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:53.054507 containerd[1482]: time="2024-11-13T08:27:53.054309117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.1605ms" Nov 13 08:27:53.054507 containerd[1482]: time="2024-11-13T08:27:53.054361886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 13 08:27:53.055614 containerd[1482]: time="2024-11-13T08:27:53.055342756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 13 08:27:53.600313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642003573.mount: Deactivated successfully. Nov 13 08:27:55.698654 containerd[1482]: time="2024-11-13T08:27:55.698584094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:55.700000 containerd[1482]: time="2024-11-13T08:27:55.699936181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779650" Nov 13 08:27:55.700603 containerd[1482]: time="2024-11-13T08:27:55.700558514Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:55.704371 containerd[1482]: time="2024-11-13T08:27:55.704315688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:27:55.705770 containerd[1482]: time="2024-11-13T08:27:55.705586770Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.650198188s" Nov 13 08:27:55.705770 containerd[1482]: time="2024-11-13T08:27:55.705632396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 13 08:27:59.046049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
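The PullImage sequence above has now fetched the full v1.31.2 control-plane image set (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) through containerd's CRI plugin. Assuming crictl is present on the host, the same pulls can be reproduced by hand over the CRI socket:

    # Pull one of the same images via the CRI endpoint the kubelet uses
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/etcd:3.5.15-0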
Nov 13 08:27:59.056182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:59.095977 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-9.scope)... Nov 13 08:27:59.095998 systemd[1]: Reloading... Nov 13 08:27:59.215835 zram_generator::config[2154]: No configuration found. Nov 13 08:27:59.349428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 13 08:27:59.467131 systemd[1]: Reloading finished in 370 ms. Nov 13 08:27:59.537888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:59.543484 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:59.545419 systemd[1]: kubelet.service: Deactivated successfully. Nov 13 08:27:59.545861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:59.552080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 13 08:27:59.696957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 13 08:27:59.709266 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 13 08:27:59.769825 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 13 08:27:59.769825 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 13 08:27:59.769825 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
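The daemon reload above also surfaces the legacy /var/run path warning for docker.socket; systemd rewrites the path on the fly, and a drop-in would silence the warning permanently (sketch; the empty ListenStream= clears the inherited value before re-setting it):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock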
Nov 13 08:27:59.770366 kubelet[2212]: I1113 08:27:59.769977 2212 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 13 08:28:00.662742 kubelet[2212]: I1113 08:28:00.662656 2212 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 13 08:28:00.662977 kubelet[2212]: I1113 08:28:00.662803 2212 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 13 08:28:00.663290 kubelet[2212]: I1113 08:28:00.663257 2212 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 13 08:28:00.694587 kubelet[2212]: E1113 08:28:00.694541 2212 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.149.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:00.695519 kubelet[2212]: I1113 08:28:00.695329 2212 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 08:28:00.709133 kubelet[2212]: E1113 08:28:00.709078 2212 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 13 08:28:00.709133 kubelet[2212]: I1113 08:28:00.709129 2212 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 13 08:28:00.715285 kubelet[2212]: I1113 08:28:00.715221 2212 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 13 08:28:00.715531 kubelet[2212]: I1113 08:28:00.715394 2212 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 13 08:28:00.715688 kubelet[2212]: I1113 08:28:00.715625 2212 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 13 08:28:00.716026 kubelet[2212]: I1113 08:28:00.715681 2212 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.0.0-f-d2466dff01","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 13 08:28:00.716185 kubelet[2212]: I1113 08:28:00.716040 2212 topology_manager.go:138] "Creating topology manager with none policy"
Nov 13 08:28:00.716185 kubelet[2212]: I1113 08:28:00.716059 2212 container_manager_linux.go:300] "Creating device plugin manager"
Nov 13 08:28:00.716268 kubelet[2212]: I1113 08:28:00.716237 2212 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:00.719383 kubelet[2212]: I1113 08:28:00.718983 2212 kubelet.go:408] "Attempting to sync node with API server"
Nov 13 08:28:00.719383 kubelet[2212]: I1113 08:28:00.719042 2212 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 13 08:28:00.719383 kubelet[2212]: I1113 08:28:00.719095 2212 kubelet.go:314] "Adding apiserver pod source"
Nov 13 08:28:00.719383 kubelet[2212]: I1113 08:28:00.719122 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 13 08:28:00.725039 kubelet[2212]: W1113 08:28:00.724348 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.149.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-f-d2466dff01&limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:00.725039 kubelet[2212]: E1113 08:28:00.724458 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.149.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-f-d2466dff01&limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:00.725588 kubelet[2212]: W1113 08:28:00.725558 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.149.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:00.726432 kubelet[2212]: E1113 08:28:00.726185 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.149.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:00.726787 kubelet[2212]: I1113 08:28:00.726620 2212 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 13 08:28:00.729033 kubelet[2212]: I1113 08:28:00.728998 2212 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 13 08:28:00.732745 kubelet[2212]: W1113 08:28:00.732307 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 13 08:28:00.734017 kubelet[2212]: I1113 08:28:00.733986 2212 server.go:1269] "Started kubelet"
Nov 13 08:28:00.736727 kubelet[2212]: I1113 08:28:00.736652 2212 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 13 08:28:00.738875 kubelet[2212]: I1113 08:28:00.738269 2212 server.go:460] "Adding debug handlers to kubelet server"
Nov 13 08:28:00.739374 kubelet[2212]: I1113 08:28:00.739314 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 13 08:28:00.739786 kubelet[2212]: I1113 08:28:00.739766 2212 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 13 08:28:00.742514 kubelet[2212]: I1113 08:28:00.741616 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 13 08:28:00.743847 kubelet[2212]: E1113 08:28:00.740091 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.149.40:6443/api/v1/namespaces/default/events\": dial tcp 64.23.149.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.0.0-f-d2466dff01.180779c7c989d006 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-f-d2466dff01,UID:ci-4152.0.0-f-d2466dff01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-f-d2466dff01,},FirstTimestamp:2024-11-13 08:28:00.733949958 +0000 UTC m=+1.019669290,LastTimestamp:2024-11-13 08:28:00.733949958 +0000 UTC m=+1.019669290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-f-d2466dff01,}"
Nov 13 08:28:00.747187 kubelet[2212]: I1113 08:28:00.747143 2212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 13 08:28:00.751399 kubelet[2212]: E1113 08:28:00.751353 2212 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.0.0-f-d2466dff01\" not found"
Nov 13 08:28:00.752111 kubelet[2212]: I1113 08:28:00.752089 2212 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 13 08:28:00.752744 kubelet[2212]: I1113 08:28:00.752725 2212 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 13 08:28:00.752913 kubelet[2212]: I1113 08:28:00.752902 2212 reconciler.go:26] "Reconciler: start to sync state"
Nov 13 08:28:00.753681 kubelet[2212]: W1113 08:28:00.753566 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.149.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:00.753863 kubelet[2212]: E1113 08:28:00.753838 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.149.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:00.754039 kubelet[2212]: E1113 08:28:00.754011 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.149.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-f-d2466dff01?timeout=10s\": dial tcp 64.23.149.40:6443: connect: connection refused" interval="200ms"
Nov 13 08:28:00.754352 kubelet[2212]: I1113 08:28:00.754332 2212 factory.go:221] Registration of the systemd container factory successfully
Nov 13 08:28:00.754525 kubelet[2212]: I1113 08:28:00.754506 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 13 08:28:00.755190 kubelet[2212]: E1113 08:28:00.755168 2212 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 13 08:28:00.757034 kubelet[2212]: I1113 08:28:00.757010 2212 factory.go:221] Registration of the containerd container factory successfully
Nov 13 08:28:00.781741 kubelet[2212]: I1113 08:28:00.779896 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 13 08:28:00.785287 kubelet[2212]: I1113 08:28:00.784753 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 13 08:28:00.785287 kubelet[2212]: I1113 08:28:00.784819 2212 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 13 08:28:00.785287 kubelet[2212]: I1113 08:28:00.784850 2212 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 13 08:28:00.785287 kubelet[2212]: E1113 08:28:00.784926 2212 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 13 08:28:00.789596 kubelet[2212]: W1113 08:28:00.789550 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.149.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:00.797878 kubelet[2212]: E1113 08:28:00.797819 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.149.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:00.803921 kubelet[2212]: I1113 08:28:00.803885 2212 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 13 08:28:00.804522 kubelet[2212]: I1113 08:28:00.804183 2212 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 13 08:28:00.804522 kubelet[2212]: I1113 08:28:00.804216 2212 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:00.808958 kubelet[2212]: I1113 08:28:00.808919 2212 policy_none.go:49] "None policy: Start"
Nov 13 08:28:00.810305 kubelet[2212]: I1113 08:28:00.810272 2212 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 13 08:28:00.810881 kubelet[2212]: I1113 08:28:00.810445 2212 state_mem.go:35] "Initializing new in-memory state store"
Nov 13 08:28:00.821945 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 13 08:28:00.837382 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 13 08:28:00.843195 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 13 08:28:00.852571 kubelet[2212]: E1113 08:28:00.852507 2212 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.0.0-f-d2466dff01\" not found"
Nov 13 08:28:00.855331 kubelet[2212]: I1113 08:28:00.855295 2212 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 13 08:28:00.856118 kubelet[2212]: I1113 08:28:00.856099 2212 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 13 08:28:00.856276 kubelet[2212]: I1113 08:28:00.856239 2212 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 13 08:28:00.856644 kubelet[2212]: I1113 08:28:00.856627 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 13 08:28:00.858720 kubelet[2212]: E1113 08:28:00.858677 2212 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.0.0-f-d2466dff01\" not found"
Nov 13 08:28:00.897455 systemd[1]: Created slice kubepods-burstable-pod36b635be0e1657d1608a62c2c758082e.slice - libcontainer container kubepods-burstable-pod36b635be0e1657d1608a62c2c758082e.slice.
Nov 13 08:28:00.911619 systemd[1]: Created slice kubepods-burstable-podbc4a314b35885102c20ea79d3af04243.slice - libcontainer container kubepods-burstable-podbc4a314b35885102c20ea79d3af04243.slice.
Nov 13 08:28:00.924501 systemd[1]: Created slice kubepods-burstable-pod1a4e932bd6248aa83bbe96afa93ef72c.slice - libcontainer container kubepods-burstable-pod1a4e932bd6248aa83bbe96afa93ef72c.slice.
Nov 13 08:28:00.954664 kubelet[2212]: E1113 08:28:00.954603 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.149.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-f-d2466dff01?timeout=10s\": dial tcp 64.23.149.40:6443: connect: connection refused" interval="400ms"
Nov 13 08:28:00.958227 kubelet[2212]: I1113 08:28:00.958158 2212 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:00.958896 kubelet[2212]: E1113 08:28:00.958840 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.149.40:6443/api/v1/nodes\": dial tcp 64.23.149.40:6443: connect: connection refused" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054411 kubelet[2212]: I1113 08:28:01.054326 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054636 kubelet[2212]: I1113 08:28:01.054484 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054636 kubelet[2212]: I1113 08:28:01.054570 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054636 kubelet[2212]: I1113 08:28:01.054601 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054808 kubelet[2212]: I1113 08:28:01.054654 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054808 kubelet[2212]: I1113 08:28:01.054679 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36b635be0e1657d1608a62c2c758082e-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-f-d2466dff01\" (UID: \"36b635be0e1657d1608a62c2c758082e\") " pod="kube-system/kube-scheduler-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054808 kubelet[2212]: I1113 08:28:01.054729 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054808 kubelet[2212]: I1113 08:28:01.054745 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.054808 kubelet[2212]: I1113 08:28:01.054779 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.160593 kubelet[2212]: I1113 08:28:01.160539 2212 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.161209 kubelet[2212]: E1113 08:28:01.161151 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.149.40:6443/api/v1/nodes\": dial tcp 64.23.149.40:6443: connect: connection refused" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.207275 kubelet[2212]: E1113 08:28:01.207055 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:01.211787 containerd[1482]: time="2024-11-13T08:28:01.211672123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-f-d2466dff01,Uid:36b635be0e1657d1608a62c2c758082e,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:01.215276 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Nov 13 08:28:01.222486 kubelet[2212]: E1113 08:28:01.222081 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:01.227249 containerd[1482]: time="2024-11-13T08:28:01.226842170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-f-d2466dff01,Uid:bc4a314b35885102c20ea79d3af04243,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:01.230894 kubelet[2212]: E1113 08:28:01.228654 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:01.231072 containerd[1482]: time="2024-11-13T08:28:01.229264149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-f-d2466dff01,Uid:1a4e932bd6248aa83bbe96afa93ef72c,Namespace:kube-system,Attempt:0,}"
Nov 13 08:28:01.355558 kubelet[2212]: E1113 08:28:01.355445 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.149.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-f-d2466dff01?timeout=10s\": dial tcp 64.23.149.40:6443: connect: connection refused" interval="800ms"
Nov 13 08:28:01.564001 kubelet[2212]: I1113 08:28:01.563372 2212 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.564433 kubelet[2212]: E1113 08:28:01.564292 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.149.40:6443/api/v1/nodes\": dial tcp 64.23.149.40:6443: connect: connection refused" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:01.836200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108267507.mount: Deactivated successfully.
Nov 13 08:28:01.846771 containerd[1482]: time="2024-11-13T08:28:01.846448298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:28:01.850264 containerd[1482]: time="2024-11-13T08:28:01.850190786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 13 08:28:01.852118 containerd[1482]: time="2024-11-13T08:28:01.851917214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:28:01.852871 containerd[1482]: time="2024-11-13T08:28:01.852834836Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:28:01.855542 containerd[1482]: time="2024-11-13T08:28:01.855497961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:28:01.856661 containerd[1482]: time="2024-11-13T08:28:01.856501453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:28:01.856820 containerd[1482]: time="2024-11-13T08:28:01.856782082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 13 08:28:01.857385 containerd[1482]: time="2024-11-13T08:28:01.857356171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 13 08:28:01.860789 containerd[1482]: time="2024-11-13T08:28:01.860740252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.746688ms"
Nov 13 08:28:01.866505 containerd[1482]: time="2024-11-13T08:28:01.866080805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.224229ms"
Nov 13 08:28:01.876770 containerd[1482]: time="2024-11-13T08:28:01.876666430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.284024ms"
Nov 13 08:28:01.938313 kubelet[2212]: E1113 08:28:01.936495 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.149.40:6443/api/v1/namespaces/default/events\": dial tcp 64.23.149.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.0.0-f-d2466dff01.180779c7c989d006 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.0.0-f-d2466dff01,UID:ci-4152.0.0-f-d2466dff01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.0.0-f-d2466dff01,},FirstTimestamp:2024-11-13 08:28:00.733949958 +0000 UTC m=+1.019669290,LastTimestamp:2024-11-13 08:28:00.733949958 +0000 UTC m=+1.019669290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.0.0-f-d2466dff01,}"
Nov 13 08:28:01.947750 kubelet[2212]: W1113 08:28:01.947593 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.149.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:01.947750 kubelet[2212]: E1113 08:28:01.947665 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.149.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:01.994288 kubelet[2212]: W1113 08:28:01.994104 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.149.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-f-d2466dff01&limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:01.994288 kubelet[2212]: E1113 08:28:01.994218 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.149.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.0.0-f-d2466dff01&limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:02.003805 kubelet[2212]: W1113 08:28:02.003295 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.149.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:02.004361 kubelet[2212]: E1113 08:28:02.004282 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.149.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:02.089277 containerd[1482]: time="2024-11-13T08:28:02.088767105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:02.091263 containerd[1482]: time="2024-11-13T08:28:02.090978171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:02.092312 containerd[1482]: time="2024-11-13T08:28:02.092025161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:02.093322 containerd[1482]: time="2024-11-13T08:28:02.091211700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.093501 containerd[1482]: time="2024-11-13T08:28:02.093293070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.094147 containerd[1482]: time="2024-11-13T08:28:02.094087062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:02.094326 containerd[1482]: time="2024-11-13T08:28:02.094293240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.095923 containerd[1482]: time="2024-11-13T08:28:02.094526955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.098436 containerd[1482]: time="2024-11-13T08:28:02.095204933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 13 08:28:02.098436 containerd[1482]: time="2024-11-13T08:28:02.095290774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 13 08:28:02.098436 containerd[1482]: time="2024-11-13T08:28:02.095309717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.098436 containerd[1482]: time="2024-11-13T08:28:02.095446827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 13 08:28:02.129351 systemd[1]: Started cri-containerd-f2e55e50982cea89244438ff847303f60bedf5d0200f68b61eaf4d97e35fcba0.scope - libcontainer container f2e55e50982cea89244438ff847303f60bedf5d0200f68b61eaf4d97e35fcba0.
Nov 13 08:28:02.154276 systemd[1]: Started cri-containerd-bfc6a83f5b6bd7f0fb3f5050e8ffa6d56fc9424290b46db3f71dea31dcb5d510.scope - libcontainer container bfc6a83f5b6bd7f0fb3f5050e8ffa6d56fc9424290b46db3f71dea31dcb5d510.
Nov 13 08:28:02.157351 systemd[1]: Started cri-containerd-d3d3222df570a43df9b3a06d0e225619de4fbc219a1e4f51c107cd8242f1f59e.scope - libcontainer container d3d3222df570a43df9b3a06d0e225619de4fbc219a1e4f51c107cd8242f1f59e.
Nov 13 08:28:02.158666 kubelet[2212]: E1113 08:28:02.158584 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.149.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.0.0-f-d2466dff01?timeout=10s\": dial tcp 64.23.149.40:6443: connect: connection refused" interval="1.6s"
Nov 13 08:28:02.250963 containerd[1482]: time="2024-11-13T08:28:02.250719177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.0.0-f-d2466dff01,Uid:bc4a314b35885102c20ea79d3af04243,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfc6a83f5b6bd7f0fb3f5050e8ffa6d56fc9424290b46db3f71dea31dcb5d510\""
Nov 13 08:28:02.254457 kubelet[2212]: E1113 08:28:02.254419 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:02.264582 containerd[1482]: time="2024-11-13T08:28:02.264343950Z" level=info msg="CreateContainer within sandbox \"bfc6a83f5b6bd7f0fb3f5050e8ffa6d56fc9424290b46db3f71dea31dcb5d510\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 13 08:28:02.269234 containerd[1482]: time="2024-11-13T08:28:02.268697347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.0.0-f-d2466dff01,Uid:1a4e932bd6248aa83bbe96afa93ef72c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3d3222df570a43df9b3a06d0e225619de4fbc219a1e4f51c107cd8242f1f59e\""
Nov 13 08:28:02.271154 kubelet[2212]: E1113 08:28:02.271118 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:02.274225 containerd[1482]: time="2024-11-13T08:28:02.273945642Z" level=info msg="CreateContainer within sandbox \"d3d3222df570a43df9b3a06d0e225619de4fbc219a1e4f51c107cd8242f1f59e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 13 08:28:02.280228 containerd[1482]: time="2024-11-13T08:28:02.279947252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.0.0-f-d2466dff01,Uid:36b635be0e1657d1608a62c2c758082e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e55e50982cea89244438ff847303f60bedf5d0200f68b61eaf4d97e35fcba0\""
Nov 13 08:28:02.281997 kubelet[2212]: E1113 08:28:02.281547 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:02.284378 containerd[1482]: time="2024-11-13T08:28:02.284321701Z" level=info msg="CreateContainer within sandbox \"f2e55e50982cea89244438ff847303f60bedf5d0200f68b61eaf4d97e35fcba0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 13 08:28:02.311887 containerd[1482]: time="2024-11-13T08:28:02.311820922Z" level=info msg="CreateContainer within sandbox \"d3d3222df570a43df9b3a06d0e225619de4fbc219a1e4f51c107cd8242f1f59e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34a84780e0365a654db4eef6d917cd98a2d980f59d18ede91079c78b9e87cd25\""
Nov 13 08:28:02.312866 containerd[1482]: time="2024-11-13T08:28:02.312735096Z" level=info msg="CreateContainer within sandbox \"bfc6a83f5b6bd7f0fb3f5050e8ffa6d56fc9424290b46db3f71dea31dcb5d510\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"86ceb8f6379621f67380bd4b1e6b8c56543e841b4e720779c044d888f8a19f27\""
Nov 13 08:28:02.312866 containerd[1482]: time="2024-11-13T08:28:02.312775429Z" level=info msg="StartContainer for \"34a84780e0365a654db4eef6d917cd98a2d980f59d18ede91079c78b9e87cd25\""
Nov 13 08:28:02.313513 containerd[1482]: time="2024-11-13T08:28:02.313457706Z" level=info msg="StartContainer for \"86ceb8f6379621f67380bd4b1e6b8c56543e841b4e720779c044d888f8a19f27\""
Nov 13 08:28:02.321937 containerd[1482]: time="2024-11-13T08:28:02.321781736Z" level=info msg="CreateContainer within sandbox \"f2e55e50982cea89244438ff847303f60bedf5d0200f68b61eaf4d97e35fcba0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8f1c3c695ed9eb5a90a424fc3cff1ac54a053b084e2ca52daa07f86e21fac77\""
Nov 13 08:28:02.323394 containerd[1482]: time="2024-11-13T08:28:02.323272975Z" level=info msg="StartContainer for \"c8f1c3c695ed9eb5a90a424fc3cff1ac54a053b084e2ca52daa07f86e21fac77\""
Nov 13 08:28:02.331826 kubelet[2212]: W1113 08:28:02.331604 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.149.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.149.40:6443: connect: connection refused
Nov 13 08:28:02.331826 kubelet[2212]: E1113 08:28:02.331728 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.149.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.149.40:6443: connect: connection refused" logger="UnhandledError"
Nov 13 08:28:02.364401 systemd[1]: Started cri-containerd-34a84780e0365a654db4eef6d917cd98a2d980f59d18ede91079c78b9e87cd25.scope - libcontainer container 34a84780e0365a654db4eef6d917cd98a2d980f59d18ede91079c78b9e87cd25.
Nov 13 08:28:02.368536 kubelet[2212]: I1113 08:28:02.368480 2212 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:02.369978 kubelet[2212]: E1113 08:28:02.369803 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.149.40:6443/api/v1/nodes\": dial tcp 64.23.149.40:6443: connect: connection refused" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:02.394010 systemd[1]: Started cri-containerd-86ceb8f6379621f67380bd4b1e6b8c56543e841b4e720779c044d888f8a19f27.scope - libcontainer container 86ceb8f6379621f67380bd4b1e6b8c56543e841b4e720779c044d888f8a19f27.
Nov 13 08:28:02.404072 systemd[1]: Started cri-containerd-c8f1c3c695ed9eb5a90a424fc3cff1ac54a053b084e2ca52daa07f86e21fac77.scope - libcontainer container c8f1c3c695ed9eb5a90a424fc3cff1ac54a053b084e2ca52daa07f86e21fac77.
Nov 13 08:28:02.491051 containerd[1482]: time="2024-11-13T08:28:02.490198579Z" level=info msg="StartContainer for \"86ceb8f6379621f67380bd4b1e6b8c56543e841b4e720779c044d888f8a19f27\" returns successfully"
Nov 13 08:28:02.517010 containerd[1482]: time="2024-11-13T08:28:02.516928928Z" level=info msg="StartContainer for \"34a84780e0365a654db4eef6d917cd98a2d980f59d18ede91079c78b9e87cd25\" returns successfully"
Nov 13 08:28:02.584131 containerd[1482]: time="2024-11-13T08:28:02.584060774Z" level=info msg="StartContainer for \"c8f1c3c695ed9eb5a90a424fc3cff1ac54a053b084e2ca52daa07f86e21fac77\" returns successfully"
Nov 13 08:28:02.816791 kubelet[2212]: E1113 08:28:02.815540 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:02.823982 kubelet[2212]: E1113 08:28:02.822280 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:02.837629 kubelet[2212]: E1113 08:28:02.837583 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:03.836760 kubelet[2212]: E1113 08:28:03.835222 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:03.972614 kubelet[2212]: I1113 08:28:03.972004 2212 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:04.704090 kubelet[2212]: E1113 08:28:04.704017 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.0.0-f-d2466dff01\" not found" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:04.728469 kubelet[2212]: I1113 08:28:04.728421 2212 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:04.728469 kubelet[2212]: E1113 08:28:04.728477 2212 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4152.0.0-f-d2466dff01\": node \"ci-4152.0.0-f-d2466dff01\" not found"
Nov 13 08:28:04.740914 kubelet[2212]: I1113 08:28:04.740821 2212 apiserver.go:52] "Watching apiserver"
Nov 13 08:28:04.753873 kubelet[2212]: I1113 08:28:04.753840 2212 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 13 08:28:07.693018 systemd[1]: Reloading requested from client PID 2489 ('systemctl') (unit session-9.scope)...
Nov 13 08:28:07.693052 systemd[1]: Reloading...
Nov 13 08:28:07.908774 zram_generator::config[2528]: No configuration found.
Nov 13 08:28:08.155257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 13 08:28:08.294532 systemd[1]: Reloading finished in 600 ms.
Nov 13 08:28:08.365989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:28:08.384795 systemd[1]: kubelet.service: Deactivated successfully.
Nov 13 08:28:08.385121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:28:08.385217 systemd[1]: kubelet.service: Consumed 1.477s CPU time, 110.5M memory peak, 0B memory swap peak.
Nov 13 08:28:08.391852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 13 08:28:08.644018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 13 08:28:08.657012 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 13 08:28:08.828397 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:28:08.828397 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 13 08:28:08.828397 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 13 08:28:08.829400 kubelet[2579]: I1113 08:28:08.828534 2579 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 13 08:28:08.841246 kubelet[2579]: I1113 08:28:08.841149 2579 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 13 08:28:08.841461 kubelet[2579]: I1113 08:28:08.841392 2579 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 13 08:28:08.846603 kubelet[2579]: I1113 08:28:08.842515 2579 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 13 08:28:08.852027 kubelet[2579]: I1113 08:28:08.851983 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 13 08:28:08.883257 kubelet[2579]: I1113 08:28:08.882068 2579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 13 08:28:08.896155 kubelet[2579]: E1113 08:28:08.896096 2579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 13 08:28:08.896155 kubelet[2579]: I1113 08:28:08.896145 2579 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 13 08:28:08.900222 sudo[2592]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 13 08:28:08.901109 sudo[2592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 13 08:28:08.903029 kubelet[2579]: I1113 08:28:08.902965 2579 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 13 08:28:08.903195 kubelet[2579]: I1113 08:28:08.903167 2579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 13 08:28:08.903434 kubelet[2579]: I1113 08:28:08.903322 2579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 13 08:28:08.903809 kubelet[2579]: I1113 08:28:08.903376 2579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.0.0-f-d2466dff01","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 13 08:28:08.903809 kubelet[2579]: I1113 08:28:08.903674 2579 topology_manager.go:138] "Creating topology manager with none policy"
Nov 13 08:28:08.903809 kubelet[2579]: I1113 08:28:08.903689 2579 container_manager_linux.go:300] "Creating device plugin manager"
Nov 13 08:28:08.903809 kubelet[2579]: I1113 08:28:08.903787 2579 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:08.905094 kubelet[2579]: I1113 08:28:08.903991 2579 kubelet.go:408] "Attempting to sync node with API server"
Nov 13 08:28:08.905094 kubelet[2579]: I1113 08:28:08.904962 2579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 13 08:28:08.905094 kubelet[2579]: I1113 08:28:08.905011 2579 kubelet.go:314] "Adding apiserver pod source"
Nov 13 08:28:08.905094 kubelet[2579]: I1113 08:28:08.905029 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 13 08:28:08.912319 kubelet[2579]: I1113 08:28:08.910835 2579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Nov 13 08:28:08.912319 kubelet[2579]: I1113 08:28:08.911499 2579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 13 08:28:08.913426 kubelet[2579]: I1113 08:28:08.913382 2579 server.go:1269] "Started kubelet"
Nov 13 08:28:08.940266 kubelet[2579]: I1113 08:28:08.940129 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 13 08:28:08.940907 kubelet[2579]: I1113 08:28:08.940793 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 13 08:28:08.947789 kubelet[2579]: I1113 08:28:08.947750 2579 server.go:460] "Adding debug handlers to kubelet server"
Nov 13 08:28:08.951128 kubelet[2579]: I1113 08:28:08.951021 2579 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 13 08:28:08.955424 kubelet[2579]: I1113 08:28:08.955377 2579 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 13 08:28:08.955633 kubelet[2579]: I1113 08:28:08.955614 2579 reconciler.go:26] "Reconciler: start to sync state"
Nov 13 08:28:08.955758 kubelet[2579]: I1113 08:28:08.955395 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 13 08:28:08.956127 kubelet[2579]: I1113 08:28:08.956100 2579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 13 08:28:08.957756 kubelet[2579]: I1113 08:28:08.956540 2579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 13 08:28:08.959672 kubelet[2579]: E1113 08:28:08.959622 2579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152.0.0-f-d2466dff01\" not found"
Nov 13 08:28:08.974440 kubelet[2579]: I1113 08:28:08.974384 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 13 08:28:08.976527 kubelet[2579]: I1113 08:28:08.976483 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 13 08:28:08.976811 kubelet[2579]: I1113 08:28:08.976797 2579 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 13 08:28:08.976947 kubelet[2579]: I1113 08:28:08.976931 2579 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 13 08:28:08.977102 kubelet[2579]: E1113 08:28:08.977078 2579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 13 08:28:08.979171 kubelet[2579]: I1113 08:28:08.979122 2579 factory.go:221] Registration of the systemd container factory successfully
Nov 13 08:28:08.979327 kubelet[2579]: I1113 08:28:08.979267 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 13 08:28:09.010574 kubelet[2579]: E1113 08:28:09.010527 2579 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 13 08:28:09.011335 kubelet[2579]: I1113 08:28:09.011297 2579 factory.go:221] Registration of the containerd container factory successfully
Nov 13 08:28:09.077809 kubelet[2579]: E1113 08:28:09.077757 2579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 13 08:28:09.124645 kubelet[2579]: I1113 08:28:09.124588 2579 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 13 08:28:09.124645 kubelet[2579]: I1113 08:28:09.124616 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 13 08:28:09.124645 kubelet[2579]: I1113 08:28:09.124643 2579 state_mem.go:36] "Initialized new in-memory state store"
Nov 13 08:28:09.125248 kubelet[2579]: I1113 08:28:09.124920 2579 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 13 08:28:09.125248 kubelet[2579]: I1113 08:28:09.124945 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 13 08:28:09.125248 kubelet[2579]: I1113 08:28:09.124970 2579 policy_none.go:49] "None policy: Start"
Nov 13 08:28:09.127000 kubelet[2579]: I1113 08:28:09.126584 2579 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 13 08:28:09.127000 kubelet[2579]: I1113 08:28:09.126633 2579 state_mem.go:35] "Initializing new in-memory state store"
Nov 13 08:28:09.127688 kubelet[2579]: I1113 08:28:09.127071 2579 state_mem.go:75] "Updated machine memory state"
Nov 13 08:28:09.135043 kubelet[2579]: I1113 08:28:09.133987 2579 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 13 08:28:09.136261 kubelet[2579]: I1113 08:28:09.135468 2579 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 13 08:28:09.137577 kubelet[2579]: I1113 08:28:09.136444 2579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 13 08:28:09.137577 kubelet[2579]: I1113 08:28:09.136855 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 13 08:28:09.248309 kubelet[2579]: I1113 08:28:09.248084 2579 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.268551 kubelet[2579]: I1113 08:28:09.266137 2579 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.268551 kubelet[2579]: I1113 08:28:09.266258 2579 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.305065 kubelet[2579]: W1113 08:28:09.304765 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:09.309434 kubelet[2579]: W1113 08:28:09.309364 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:09.311161 kubelet[2579]: W1113 08:28:09.310139 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:09.360540 kubelet[2579]: I1113 08:28:09.360079 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.360540 kubelet[2579]: I1113 08:28:09.360151 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-ca-certs\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.360540 kubelet[2579]: I1113 08:28:09.360191 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.360540 kubelet[2579]: I1113 08:28:09.360221 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.360540 kubelet[2579]: I1113 08:28:09.360250 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36b635be0e1657d1608a62c2c758082e-kubeconfig\") pod \"kube-scheduler-ci-4152.0.0-f-d2466dff01\" (UID: \"36b635be0e1657d1608a62c2c758082e\") " pod="kube-system/kube-scheduler-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.361212 kubelet[2579]: I1113 08:28:09.360274 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-ca-certs\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.361212 kubelet[2579]: I1113 08:28:09.360300 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-kubeconfig\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.361212 kubelet[2579]: I1113 08:28:09.360324 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc4a314b35885102c20ea79d3af04243-k8s-certs\") pod \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" (UID: \"bc4a314b35885102c20ea79d3af04243\") " pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.361780 kubelet[2579]: I1113 08:28:09.360353 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a4e932bd6248aa83bbe96afa93ef72c-k8s-certs\") pod \"kube-controller-manager-ci-4152.0.0-f-d2466dff01\" (UID: \"1a4e932bd6248aa83bbe96afa93ef72c\") " pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:09.605821 kubelet[2579]: E1113 08:28:09.605533 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:09.612510 kubelet[2579]: E1113 08:28:09.612462 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:09.612955 kubelet[2579]: E1113 08:28:09.612670 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:09.924748 kubelet[2579]: I1113 08:28:09.924580 2579 apiserver.go:52] "Watching apiserver"
Nov 13 08:28:09.948444 sudo[2592]: pam_unix(sudo:session): session closed for user root
Nov 13 08:28:09.956009 kubelet[2579]: I1113 08:28:09.955899 2579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 13 08:28:10.068319 kubelet[2579]: E1113 08:28:10.067518 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:10.068538 kubelet[2579]: E1113 08:28:10.068507 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:10.081230 kubelet[2579]: W1113 08:28:10.081192 2579 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 13 08:28:10.082378 kubelet[2579]: E1113 08:28:10.082252 2579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.0.0-f-d2466dff01\" already exists" pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01"
Nov 13 08:28:10.083258 kubelet[2579]: E1113 08:28:10.083238 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:10.213805 kubelet[2579]: I1113 08:28:10.213560 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.0.0-f-d2466dff01" podStartSLOduration=1.213531609 podStartE2EDuration="1.213531609s" podCreationTimestamp="2024-11-13 08:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:10.184214768 +0000 UTC m=+1.465740438" watchObservedRunningTime="2024-11-13 08:28:10.213531609 +0000 UTC m=+1.495057266"
Nov 13 08:28:10.264066 kubelet[2579]: I1113 08:28:10.263917 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.0.0-f-d2466dff01" podStartSLOduration=1.263897853 podStartE2EDuration="1.263897853s" podCreationTimestamp="2024-11-13 08:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:10.215211956 +0000 UTC m=+1.496737648" watchObservedRunningTime="2024-11-13 08:28:10.263897853 +0000 UTC m=+1.545423514"
Nov 13 08:28:10.281578 kubelet[2579]: I1113 08:28:10.281281 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.0.0-f-d2466dff01" podStartSLOduration=1.28125547 podStartE2EDuration="1.28125547s" podCreationTimestamp="2024-11-13 08:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:10.265247166 +0000 UTC m=+1.546772830" watchObservedRunningTime="2024-11-13 08:28:10.28125547 +0000 UTC m=+1.562781140"
Nov 13 08:28:11.069012 kubelet[2579]: E1113 08:28:11.068328 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:11.071254 kubelet[2579]: E1113 08:28:11.070249 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 13 08:28:11.694671 sudo[1691]: pam_unix(sudo:session): session closed for user root
Nov 13 08:28:11.699237 sshd[1688]: Connection closed by 139.178.89.65 port 57010
Nov 13 08:28:11.700616 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Nov 13 08:28:11.705059 systemd[1]: sshd@8-64.23.149.40:22-139.178.89.65:57010.service: Deactivated successfully.
Nov 13 08:28:11.708148 systemd[1]: session-9.scope: Deactivated successfully.
Nov 13 08:28:11.708602 systemd[1]: session-9.scope: Consumed 6.237s CPU time, 147.2M memory peak, 0B memory swap peak.
Nov 13 08:28:11.711335 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
Nov 13 08:28:11.713774 systemd-logind[1457]: Removed session 9.
Nov 13 08:28:12.713908 kubelet[2579]: I1113 08:28:12.713569 2579 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 13 08:28:12.714405 kubelet[2579]: I1113 08:28:12.714384 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 13 08:28:12.714444 containerd[1482]: time="2024-11-13T08:28:12.714142968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 13 08:28:13.049980 update_engine[1460]: I20241113 08:28:13.048625 1460 update_attempter.cc:509] Updating boot flags...
Nov 13 08:28:13.092230 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2653)
Nov 13 08:28:13.211766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2651)
Nov 13 08:28:13.742747 systemd[1]: Created slice kubepods-besteffort-pode4cb3eaa_0046_4f88_b52c_c57b42c3899c.slice - libcontainer container kubepods-besteffort-pode4cb3eaa_0046_4f88_b52c_c57b42c3899c.slice.
Nov 13 08:28:13.762132 systemd[1]: Created slice kubepods-burstable-pod679deea7_1e31_4199_9f95_aecaa1339cc0.slice - libcontainer container kubepods-burstable-pod679deea7_1e31_4199_9f95_aecaa1339cc0.slice.
Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791191 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-net\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791256 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-hubble-tls\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791288 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-config-path\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791323 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4cb3eaa-0046-4f88-b52c-c57b42c3899c-xtables-lock\") pod \"kube-proxy-zmt4m\" (UID: \"e4cb3eaa-0046-4f88-b52c-c57b42c3899c\") " pod="kube-system/kube-proxy-zmt4m" Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791347 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4cb3eaa-0046-4f88-b52c-c57b42c3899c-lib-modules\") pod \"kube-proxy-zmt4m\" (UID: \"e4cb3eaa-0046-4f88-b52c-c57b42c3899c\") " pod="kube-system/kube-proxy-zmt4m" Nov 13 08:28:13.792627 kubelet[2579]: I1113 08:28:13.791376 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cni-path\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793475 kubelet[2579]: I1113 08:28:13.791401 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/679deea7-1e31-4199-9f95-aecaa1339cc0-clustermesh-secrets\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793475 kubelet[2579]: I1113 08:28:13.791429 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklfq\" (UniqueName: \"kubernetes.io/projected/e4cb3eaa-0046-4f88-b52c-c57b42c3899c-kube-api-access-xklfq\") pod \"kube-proxy-zmt4m\" (UID: \"e4cb3eaa-0046-4f88-b52c-c57b42c3899c\") " pod="kube-system/kube-proxy-zmt4m" Nov 13 08:28:13.793475 kubelet[2579]: I1113 08:28:13.791466 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-bpf-maps\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793475 kubelet[2579]: I1113 08:28:13.791495 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-lib-modules\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793475 kubelet[2579]: I1113 08:28:13.791521 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-kernel\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791548 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68jz\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-kube-api-access-b68jz\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791574 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-hostproc\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791597 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-cgroup\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791660 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-etc-cni-netd\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791694 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-xtables-lock\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.793690 kubelet[2579]: I1113 08:28:13.791746 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4cb3eaa-0046-4f88-b52c-c57b42c3899c-kube-proxy\") pod \"kube-proxy-zmt4m\" (UID: \"e4cb3eaa-0046-4f88-b52c-c57b42c3899c\") " pod="kube-system/kube-proxy-zmt4m" Nov 13 08:28:13.795620 kubelet[2579]: I1113 08:28:13.791777 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-run\") pod \"cilium-jwdpp\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " pod="kube-system/cilium-jwdpp" Nov 13 08:28:13.814445 systemd[1]: Created slice kubepods-besteffort-podddbeb871_0476_4f5b_b1f2_18715a726bb9.slice - libcontainer container kubepods-besteffort-podddbeb871_0476_4f5b_b1f2_18715a726bb9.slice. 
Nov 13 08:28:13.892092 kubelet[2579]: I1113 08:28:13.892027 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgl2n\" (UniqueName: \"kubernetes.io/projected/ddbeb871-0476-4f5b-b1f2-18715a726bb9-kube-api-access-wgl2n\") pod \"cilium-operator-5d85765b45-pfqjj\" (UID: \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\") " pod="kube-system/cilium-operator-5d85765b45-pfqjj" Nov 13 08:28:13.892992 kubelet[2579]: I1113 08:28:13.892952 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbeb871-0476-4f5b-b1f2-18715a726bb9-cilium-config-path\") pod \"cilium-operator-5d85765b45-pfqjj\" (UID: \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\") " pod="kube-system/cilium-operator-5d85765b45-pfqjj" Nov 13 08:28:14.052405 kubelet[2579]: E1113 08:28:14.052235 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.053945 containerd[1482]: time="2024-11-13T08:28:14.053415498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmt4m,Uid:e4cb3eaa-0046-4f88-b52c-c57b42c3899c,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:14.066320 kubelet[2579]: E1113 08:28:14.066242 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.067071 containerd[1482]: time="2024-11-13T08:28:14.066998834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwdpp,Uid:679deea7-1e31-4199-9f95-aecaa1339cc0,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:14.099535 containerd[1482]: time="2024-11-13T08:28:14.099149454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:14.099535 containerd[1482]: time="2024-11-13T08:28:14.099276079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:14.099535 containerd[1482]: time="2024-11-13T08:28:14.099302140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.100871 containerd[1482]: time="2024-11-13T08:28:14.100769326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.117579 containerd[1482]: time="2024-11-13T08:28:14.117224533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:14.117579 containerd[1482]: time="2024-11-13T08:28:14.117367468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:14.117579 containerd[1482]: time="2024-11-13T08:28:14.117421773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.117873 containerd[1482]: time="2024-11-13T08:28:14.117693083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.121306 kubelet[2579]: E1113 08:28:14.120093 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.122738 containerd[1482]: time="2024-11-13T08:28:14.122658159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pfqjj,Uid:ddbeb871-0476-4f5b-b1f2-18715a726bb9,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:14.135634 systemd[1]: Started cri-containerd-0d274e13ec8d105ea3fbbdbf84a32780886c27ba31bf13a9c90f8fe1ec7a7460.scope - libcontainer container 0d274e13ec8d105ea3fbbdbf84a32780886c27ba31bf13a9c90f8fe1ec7a7460. Nov 13 08:28:14.183269 systemd[1]: Started cri-containerd-3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd.scope - libcontainer container 3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd. Nov 13 08:28:14.200199 containerd[1482]: time="2024-11-13T08:28:14.198237719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmt4m,Uid:e4cb3eaa-0046-4f88-b52c-c57b42c3899c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d274e13ec8d105ea3fbbdbf84a32780886c27ba31bf13a9c90f8fe1ec7a7460\"" Nov 13 08:28:14.201170 containerd[1482]: time="2024-11-13T08:28:14.200994050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:14.203165 containerd[1482]: time="2024-11-13T08:28:14.201242205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:14.203165 containerd[1482]: time="2024-11-13T08:28:14.201267138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.204417 kubelet[2579]: E1113 08:28:14.203861 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.206425 containerd[1482]: time="2024-11-13T08:28:14.202860695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:14.211271 containerd[1482]: time="2024-11-13T08:28:14.211039807Z" level=info msg="CreateContainer within sandbox \"0d274e13ec8d105ea3fbbdbf84a32780886c27ba31bf13a9c90f8fe1ec7a7460\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 13 08:28:14.264394 containerd[1482]: time="2024-11-13T08:28:14.264343441Z" level=info msg="CreateContainer within sandbox \"0d274e13ec8d105ea3fbbdbf84a32780886c27ba31bf13a9c90f8fe1ec7a7460\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"569741b7a874ca226ae07954ffea765b6cf1e140facde9a60d382458fd42d6f8\"" Nov 13 08:28:14.267161 containerd[1482]: time="2024-11-13T08:28:14.267097113Z" level=info msg="StartContainer for \"569741b7a874ca226ae07954ffea765b6cf1e140facde9a60d382458fd42d6f8\"" Nov 13 08:28:14.269323 systemd[1]: Started cri-containerd-226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f.scope - libcontainer container 226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f. 
Nov 13 08:28:14.280544 containerd[1482]: time="2024-11-13T08:28:14.280399037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwdpp,Uid:679deea7-1e31-4199-9f95-aecaa1339cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\"" Nov 13 08:28:14.284669 kubelet[2579]: E1113 08:28:14.282927 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.288096 containerd[1482]: time="2024-11-13T08:28:14.288046522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 13 08:28:14.342002 systemd[1]: Started cri-containerd-569741b7a874ca226ae07954ffea765b6cf1e140facde9a60d382458fd42d6f8.scope - libcontainer container 569741b7a874ca226ae07954ffea765b6cf1e140facde9a60d382458fd42d6f8. Nov 13 08:28:14.380988 containerd[1482]: time="2024-11-13T08:28:14.380945269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pfqjj,Uid:ddbeb871-0476-4f5b-b1f2-18715a726bb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\"" Nov 13 08:28:14.382792 kubelet[2579]: E1113 08:28:14.382539 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:14.422745 containerd[1482]: time="2024-11-13T08:28:14.421382110Z" level=info msg="StartContainer for \"569741b7a874ca226ae07954ffea765b6cf1e140facde9a60d382458fd42d6f8\" returns successfully" Nov 13 08:28:15.082096 kubelet[2579]: E1113 08:28:15.082009 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:15.744888 kubelet[2579]: E1113 08:28:15.744739 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:15.771744 kubelet[2579]: I1113 08:28:15.769956 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zmt4m" podStartSLOduration=2.769936511 podStartE2EDuration="2.769936511s" podCreationTimestamp="2024-11-13 08:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:15.100645934 +0000 UTC m=+6.382171591" watchObservedRunningTime="2024-11-13 08:28:15.769936511 +0000 UTC m=+7.051462167" Nov 13 08:28:16.083942 kubelet[2579]: E1113 08:28:16.083899 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:18.236742 kubelet[2579]: E1113 08:28:18.236014 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:19.091003 kubelet[2579]: E1113 08:28:19.090945 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:20.188762 kubelet[2579]: E1113 08:28:20.188688 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:23.448406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524833069.mount: Deactivated successfully. Nov 13 08:28:26.255890 containerd[1482]: time="2024-11-13T08:28:26.217430649Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299" Nov 13 08:28:26.258256 containerd[1482]: time="2024-11-13T08:28:26.258003444Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.969862516s" Nov 13 08:28:26.258256 containerd[1482]: time="2024-11-13T08:28:26.258054728Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 13 08:28:26.262000 containerd[1482]: time="2024-11-13T08:28:26.261303040Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 13 08:28:26.267929 containerd[1482]: time="2024-11-13T08:28:26.267874276Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:26.269448 containerd[1482]: time="2024-11-13T08:28:26.269025061Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:26.271487 containerd[1482]: time="2024-11-13T08:28:26.271440485Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 13 08:28:26.349424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182370920.mount: Deactivated successfully. Nov 13 08:28:26.357147 containerd[1482]: time="2024-11-13T08:28:26.357091601Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\"" Nov 13 08:28:26.358046 containerd[1482]: time="2024-11-13T08:28:26.357977777Z" level=info msg="StartContainer for \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\"" Nov 13 08:28:26.475973 systemd[1]: Started cri-containerd-46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e.scope - libcontainer container 46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e. Nov 13 08:28:26.536031 systemd[1]: cri-containerd-46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e.scope: Deactivated successfully. 
Nov 13 08:28:26.546607 containerd[1482]: time="2024-11-13T08:28:26.546546774Z" level=info msg="StartContainer for \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\" returns successfully" Nov 13 08:28:26.630019 containerd[1482]: time="2024-11-13T08:28:26.625067301Z" level=info msg="shim disconnected" id=46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e namespace=k8s.io Nov 13 08:28:26.630019 containerd[1482]: time="2024-11-13T08:28:26.630014619Z" level=warning msg="cleaning up after shim disconnected" id=46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e namespace=k8s.io Nov 13 08:28:26.630019 containerd[1482]: time="2024-11-13T08:28:26.630031690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:27.113964 kubelet[2579]: E1113 08:28:27.113833 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:27.119811 containerd[1482]: time="2024-11-13T08:28:27.119731629Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 13 08:28:27.150310 containerd[1482]: time="2024-11-13T08:28:27.150227386Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\"" Nov 13 08:28:27.152037 containerd[1482]: time="2024-11-13T08:28:27.151991667Z" level=info msg="StartContainer for \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\"" Nov 13 08:28:27.203099 systemd[1]: Started cri-containerd-8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97.scope - libcontainer container 8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97. Nov 13 08:28:27.246963 containerd[1482]: time="2024-11-13T08:28:27.246654431Z" level=info msg="StartContainer for \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\" returns successfully" Nov 13 08:28:27.263863 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 13 08:28:27.264237 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 13 08:28:27.264344 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:28:27.272120 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 13 08:28:27.272444 systemd[1]: cri-containerd-8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97.scope: Deactivated successfully. Nov 13 08:28:27.324677 containerd[1482]: time="2024-11-13T08:28:27.324315949Z" level=info msg="shim disconnected" id=8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97 namespace=k8s.io Nov 13 08:28:27.324677 containerd[1482]: time="2024-11-13T08:28:27.324390450Z" level=warning msg="cleaning up after shim disconnected" id=8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97 namespace=k8s.io Nov 13 08:28:27.324677 containerd[1482]: time="2024-11-13T08:28:27.324405859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:27.326785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 13 08:28:27.349221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e-rootfs.mount: Deactivated successfully. Nov 13 08:28:28.015361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3951474143.mount: Deactivated successfully. Nov 13 08:28:28.129275 kubelet[2579]: E1113 08:28:28.129225 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:28.159031 containerd[1482]: time="2024-11-13T08:28:28.154261882Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 13 08:28:28.237365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2072838251.mount: Deactivated successfully. Nov 13 08:28:28.246306 containerd[1482]: time="2024-11-13T08:28:28.246139565Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\"" Nov 13 08:28:28.249268 containerd[1482]: time="2024-11-13T08:28:28.248303312Z" level=info msg="StartContainer for \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\"" Nov 13 08:28:28.323988 systemd[1]: Started cri-containerd-b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da.scope - libcontainer container b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da. Nov 13 08:28:28.415941 systemd[1]: cri-containerd-b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da.scope: Deactivated successfully. Nov 13 08:28:28.420832 containerd[1482]: time="2024-11-13T08:28:28.420236768Z" level=info msg="StartContainer for \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\" returns successfully" Nov 13 08:28:28.494455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da-rootfs.mount: Deactivated successfully. 
Nov 13 08:28:28.500430 containerd[1482]: time="2024-11-13T08:28:28.500338090Z" level=info msg="shim disconnected" id=b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da namespace=k8s.io Nov 13 08:28:28.500975 containerd[1482]: time="2024-11-13T08:28:28.500848890Z" level=warning msg="cleaning up after shim disconnected" id=b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da namespace=k8s.io Nov 13 08:28:28.500975 containerd[1482]: time="2024-11-13T08:28:28.500876516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:28.526760 containerd[1482]: time="2024-11-13T08:28:28.525073843Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:28:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 13 08:28:29.137350 kubelet[2579]: E1113 08:28:29.137289 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:29.153091 containerd[1482]: time="2024-11-13T08:28:29.153035770Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 13 08:28:29.178184 containerd[1482]: time="2024-11-13T08:28:29.178107407Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:29.192595 containerd[1482]: time="2024-11-13T08:28:29.192540838Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 13 08:28:29.196129 containerd[1482]: time="2024-11-13T08:28:29.196066351Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\"" Nov 13 08:28:29.196318 containerd[1482]: time="2024-11-13T08:28:29.196221578Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Nov 13 08:28:29.197510 containerd[1482]: time="2024-11-13T08:28:29.197465693Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.936112019s" Nov 13 08:28:29.197688 containerd[1482]: time="2024-11-13T08:28:29.197665318Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 13 08:28:29.201758 containerd[1482]: time="2024-11-13T08:28:29.201531366Z" level=info msg="StartContainer for \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\"" Nov 13 08:28:29.205156 
containerd[1482]: time="2024-11-13T08:28:29.205105055Z" level=info msg="CreateContainer within sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 13 08:28:29.253168 containerd[1482]: time="2024-11-13T08:28:29.253107366Z" level=info msg="CreateContainer within sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\"" Nov 13 08:28:29.254759 containerd[1482]: time="2024-11-13T08:28:29.254632761Z" level=info msg="StartContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\"" Nov 13 08:28:29.295042 systemd[1]: Started cri-containerd-d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669.scope - libcontainer container d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669. Nov 13 08:28:29.328060 systemd[1]: Started cri-containerd-b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798.scope - libcontainer container b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798. Nov 13 08:28:29.371157 systemd[1]: cri-containerd-d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669.scope: Deactivated successfully. Nov 13 08:28:29.380391 containerd[1482]: time="2024-11-13T08:28:29.380221498Z" level=info msg="StartContainer for \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\" returns successfully" Nov 13 08:28:29.429927 containerd[1482]: time="2024-11-13T08:28:29.427960569Z" level=info msg="StartContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" returns successfully" Nov 13 08:28:29.442978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669-rootfs.mount: Deactivated successfully. 
Nov 13 08:28:29.447805 containerd[1482]: time="2024-11-13T08:28:29.447150481Z" level=info msg="shim disconnected" id=d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669 namespace=k8s.io Nov 13 08:28:29.447805 containerd[1482]: time="2024-11-13T08:28:29.447406518Z" level=warning msg="cleaning up after shim disconnected" id=d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669 namespace=k8s.io Nov 13 08:28:29.447805 containerd[1482]: time="2024-11-13T08:28:29.447421016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:28:30.145785 kubelet[2579]: E1113 08:28:30.145557 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:30.153425 kubelet[2579]: E1113 08:28:30.153382 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:30.158785 containerd[1482]: time="2024-11-13T08:28:30.158652488Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 13 08:28:30.212948 containerd[1482]: time="2024-11-13T08:28:30.211106691Z" level=info msg="CreateContainer within sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\"" Nov 13 08:28:30.212948 containerd[1482]: time="2024-11-13T08:28:30.211786900Z" level=info msg="StartContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\"" Nov 13 08:28:30.213255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991518296.mount: Deactivated successfully. Nov 13 08:28:30.292017 systemd[1]: Started cri-containerd-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef.scope - libcontainer container 0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef. Nov 13 08:28:30.411524 kubelet[2579]: I1113 08:28:30.411030 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pfqjj" podStartSLOduration=2.594512285 podStartE2EDuration="17.411006023s" podCreationTimestamp="2024-11-13 08:28:13 +0000 UTC" firstStartedPulling="2024-11-13 08:28:14.384655004 +0000 UTC m=+5.666180651" lastFinishedPulling="2024-11-13 08:28:29.201148723 +0000 UTC m=+20.482674389" observedRunningTime="2024-11-13 08:28:30.295917389 +0000 UTC m=+21.577443041" watchObservedRunningTime="2024-11-13 08:28:30.411006023 +0000 UTC m=+21.692531678" Nov 13 08:28:30.421251 containerd[1482]: time="2024-11-13T08:28:30.421172389Z" level=info msg="StartContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" returns successfully" Nov 13 08:28:30.566683 systemd[1]: run-containerd-runc-k8s.io-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef-runc.QA2PqJ.mount: Deactivated successfully. Nov 13 08:28:30.813938 kubelet[2579]: I1113 08:28:30.813760 2579 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 13 08:28:30.945429 systemd[1]: Created slice kubepods-burstable-pod9dc9fb59_a8a1_4891_a1d9_e724bebadbb4.slice - libcontainer container kubepods-burstable-pod9dc9fb59_a8a1_4891_a1d9_e724bebadbb4.slice. 
Nov 13 08:28:30.958622 systemd[1]: Created slice kubepods-burstable-poddcaad878_fa95_43c0_a15f_ba41a26c6aac.slice - libcontainer container kubepods-burstable-poddcaad878_fa95_43c0_a15f_ba41a26c6aac.slice. Nov 13 08:28:31.040673 kubelet[2579]: I1113 08:28:31.040600 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcaad878-fa95-43c0-a15f-ba41a26c6aac-config-volume\") pod \"coredns-6f6b679f8f-7trhx\" (UID: \"dcaad878-fa95-43c0-a15f-ba41a26c6aac\") " pod="kube-system/coredns-6f6b679f8f-7trhx" Nov 13 08:28:31.040673 kubelet[2579]: I1113 08:28:31.040668 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dc9fb59-a8a1-4891-a1d9-e724bebadbb4-config-volume\") pod \"coredns-6f6b679f8f-qc5qx\" (UID: \"9dc9fb59-a8a1-4891-a1d9-e724bebadbb4\") " pod="kube-system/coredns-6f6b679f8f-qc5qx" Nov 13 08:28:31.040951 kubelet[2579]: I1113 08:28:31.040721 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd52z\" (UniqueName: \"kubernetes.io/projected/dcaad878-fa95-43c0-a15f-ba41a26c6aac-kube-api-access-zd52z\") pod \"coredns-6f6b679f8f-7trhx\" (UID: \"dcaad878-fa95-43c0-a15f-ba41a26c6aac\") " pod="kube-system/coredns-6f6b679f8f-7trhx" Nov 13 08:28:31.040951 kubelet[2579]: I1113 08:28:31.040864 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nclh\" (UniqueName: \"kubernetes.io/projected/9dc9fb59-a8a1-4891-a1d9-e724bebadbb4-kube-api-access-8nclh\") pod \"coredns-6f6b679f8f-qc5qx\" (UID: \"9dc9fb59-a8a1-4891-a1d9-e724bebadbb4\") " pod="kube-system/coredns-6f6b679f8f-qc5qx" Nov 13 08:28:31.163977 kubelet[2579]: E1113 08:28:31.163924 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:31.164486 kubelet[2579]: E1113 08:28:31.164329 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:31.252450 kubelet[2579]: E1113 08:28:31.252089 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:31.254665 containerd[1482]: time="2024-11-13T08:28:31.253987462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qc5qx,Uid:9dc9fb59-a8a1-4891-a1d9-e724bebadbb4,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:31.263608 kubelet[2579]: E1113 08:28:31.263527 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:31.266481 containerd[1482]: time="2024-11-13T08:28:31.265993837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7trhx,Uid:dcaad878-fa95-43c0-a15f-ba41a26c6aac,Namespace:kube-system,Attempt:0,}" Nov 13 08:28:32.165612 kubelet[2579]: E1113 08:28:32.165525 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 
08:28:33.168869 kubelet[2579]: E1113 08:28:33.168751 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:33.466843 systemd-networkd[1383]: cilium_host: Link UP Nov 13 08:28:33.466970 systemd-networkd[1383]: cilium_net: Link UP Nov 13 08:28:33.467119 systemd-networkd[1383]: cilium_net: Gained carrier Nov 13 08:28:33.467249 systemd-networkd[1383]: cilium_host: Gained carrier Nov 13 08:28:33.479302 systemd-networkd[1383]: cilium_host: Gained IPv6LL Nov 13 08:28:33.625183 systemd-networkd[1383]: cilium_vxlan: Link UP Nov 13 08:28:33.625192 systemd-networkd[1383]: cilium_vxlan: Gained carrier Nov 13 08:28:33.634932 systemd-networkd[1383]: cilium_net: Gained IPv6LL Nov 13 08:28:34.085788 kernel: NET: Registered PF_ALG protocol family Nov 13 08:28:34.818928 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Nov 13 08:28:35.039750 systemd-networkd[1383]: lxc_health: Link UP Nov 13 08:28:35.054943 systemd-networkd[1383]: lxc_health: Gained carrier Nov 13 08:28:35.431047 systemd-networkd[1383]: lxcc781bf33019f: Link UP Nov 13 08:28:35.438163 systemd-networkd[1383]: lxc17ee10a54b8a: Link UP Nov 13 08:28:35.443938 kernel: eth0: renamed from tmpf6978 Nov 13 08:28:35.453757 kernel: eth0: renamed from tmp4b520 Nov 13 08:28:35.452922 systemd-networkd[1383]: lxcc781bf33019f: Gained carrier Nov 13 08:28:35.462055 systemd-networkd[1383]: lxc17ee10a54b8a: Gained carrier Nov 13 08:28:36.067838 kubelet[2579]: E1113 08:28:36.067786 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:36.103230 kubelet[2579]: I1113 08:28:36.103156 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jwdpp" podStartSLOduration=11.126604602 podStartE2EDuration="23.103138683s" podCreationTimestamp="2024-11-13 08:28:13 +0000 UTC" firstStartedPulling="2024-11-13 08:28:14.28445513 +0000 UTC m=+5.565980762" lastFinishedPulling="2024-11-13 08:28:26.260989195 +0000 UTC m=+17.542514843" observedRunningTime="2024-11-13 08:28:31.223972345 +0000 UTC m=+22.505497998" watchObservedRunningTime="2024-11-13 08:28:36.103138683 +0000 UTC m=+27.384664342" Nov 13 08:28:36.178097 kubelet[2579]: E1113 08:28:36.178053 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:36.226938 systemd-networkd[1383]: lxc_health: Gained IPv6LL Nov 13 08:28:36.546987 systemd-networkd[1383]: lxc17ee10a54b8a: Gained IPv6LL Nov 13 08:28:37.180819 kubelet[2579]: E1113 08:28:37.180764 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:37.187243 systemd-networkd[1383]: lxcc781bf33019f: Gained IPv6LL Nov 13 08:28:40.622528 containerd[1482]: time="2024-11-13T08:28:40.622367269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:40.626383 containerd[1482]: time="2024-11-13T08:28:40.625375891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:40.626383 containerd[1482]: time="2024-11-13T08:28:40.625435920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:40.626383 containerd[1482]: time="2024-11-13T08:28:40.625589062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:40.630062 containerd[1482]: time="2024-11-13T08:28:40.627557714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:28:40.630062 containerd[1482]: time="2024-11-13T08:28:40.627638982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:28:40.630062 containerd[1482]: time="2024-11-13T08:28:40.627656283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:40.630062 containerd[1482]: time="2024-11-13T08:28:40.627805092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:28:40.707042 systemd[1]: Started cri-containerd-4b5204cf9cdf095384f7f87c7438a1e39fb5329cdf380c966046decf390a6524.scope - libcontainer container 4b5204cf9cdf095384f7f87c7438a1e39fb5329cdf380c966046decf390a6524. Nov 13 08:28:40.715393 systemd[1]: Started cri-containerd-f69782a4d8237ad7812bb6a296fc5ae92bdbabb85685dac76f08342c8bbd8ca5.scope - libcontainer container f69782a4d8237ad7812bb6a296fc5ae92bdbabb85685dac76f08342c8bbd8ca5. Nov 13 08:28:40.783529 containerd[1482]: time="2024-11-13T08:28:40.783120795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7trhx,Uid:dcaad878-fa95-43c0-a15f-ba41a26c6aac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f69782a4d8237ad7812bb6a296fc5ae92bdbabb85685dac76f08342c8bbd8ca5\"" Nov 13 08:28:40.786223 kubelet[2579]: E1113 08:28:40.786184 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:40.792758 containerd[1482]: time="2024-11-13T08:28:40.792411683Z" level=info msg="CreateContainer within sandbox \"f69782a4d8237ad7812bb6a296fc5ae92bdbabb85685dac76f08342c8bbd8ca5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 08:28:40.819730 containerd[1482]: time="2024-11-13T08:28:40.818123081Z" level=info msg="CreateContainer within sandbox \"f69782a4d8237ad7812bb6a296fc5ae92bdbabb85685dac76f08342c8bbd8ca5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4d578eff8b74d464e9088c61ee27ec67fdfc53a5b2ea495e2b105434fb06646\"" Nov 13 08:28:40.820848 containerd[1482]: time="2024-11-13T08:28:40.820796915Z" level=info msg="StartContainer for \"a4d578eff8b74d464e9088c61ee27ec67fdfc53a5b2ea495e2b105434fb06646\"" Nov 13 08:28:40.867631 containerd[1482]: time="2024-11-13T08:28:40.867573098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qc5qx,Uid:9dc9fb59-a8a1-4891-a1d9-e724bebadbb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5204cf9cdf095384f7f87c7438a1e39fb5329cdf380c966046decf390a6524\"" Nov 13 08:28:40.869978 kubelet[2579]: E1113 08:28:40.869942 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:40.874877 containerd[1482]: time="2024-11-13T08:28:40.874299051Z" level=info msg="CreateContainer within sandbox \"4b5204cf9cdf095384f7f87c7438a1e39fb5329cdf380c966046decf390a6524\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 13 08:28:40.890595 systemd[1]: Started cri-containerd-a4d578eff8b74d464e9088c61ee27ec67fdfc53a5b2ea495e2b105434fb06646.scope - libcontainer container a4d578eff8b74d464e9088c61ee27ec67fdfc53a5b2ea495e2b105434fb06646. Nov 13 08:28:40.903174 containerd[1482]: time="2024-11-13T08:28:40.903123394Z" level=info msg="CreateContainer within sandbox \"4b5204cf9cdf095384f7f87c7438a1e39fb5329cdf380c966046decf390a6524\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca32ff05a5425d891fbcc14dd8ba1cb47ca54044b7de70e6b109de77f6a46c45\"" Nov 13 08:28:40.904942 containerd[1482]: time="2024-11-13T08:28:40.904901698Z" level=info msg="StartContainer for \"ca32ff05a5425d891fbcc14dd8ba1cb47ca54044b7de70e6b109de77f6a46c45\"" Nov 13 08:28:40.958241 containerd[1482]: time="2024-11-13T08:28:40.958145017Z" level=info msg="StartContainer for \"a4d578eff8b74d464e9088c61ee27ec67fdfc53a5b2ea495e2b105434fb06646\" returns successfully" Nov 13 08:28:40.971024 systemd[1]: Started cri-containerd-ca32ff05a5425d891fbcc14dd8ba1cb47ca54044b7de70e6b109de77f6a46c45.scope - libcontainer container ca32ff05a5425d891fbcc14dd8ba1cb47ca54044b7de70e6b109de77f6a46c45. Nov 13 08:28:41.019772 containerd[1482]: time="2024-11-13T08:28:41.019696993Z" level=info msg="StartContainer for \"ca32ff05a5425d891fbcc14dd8ba1cb47ca54044b7de70e6b109de77f6a46c45\" returns successfully" Nov 13 08:28:41.201460 kubelet[2579]: E1113 08:28:41.199838 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:41.208979 kubelet[2579]: E1113 08:28:41.208946 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:41.230384 kubelet[2579]: I1113 08:28:41.230095 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7trhx" podStartSLOduration=28.230075565 podStartE2EDuration="28.230075565s" podCreationTimestamp="2024-11-13 08:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:41.227696081 +0000 UTC m=+32.509221769" watchObservedRunningTime="2024-11-13 08:28:41.230075565 +0000 UTC m=+32.511601223" Nov 13 08:28:41.249987 kubelet[2579]: I1113 08:28:41.249452 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qc5qx" podStartSLOduration=28.248880701 podStartE2EDuration="28.248880701s" podCreationTimestamp="2024-11-13 08:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:28:41.247251202 +0000 UTC m=+32.528776867" watchObservedRunningTime="2024-11-13 08:28:41.248880701 +0000 UTC m=+32.530406364" Nov 13 08:28:41.633515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288840133.mount: Deactivated successfully. 
Nov 13 08:28:42.211030 kubelet[2579]: E1113 08:28:42.210818 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:42.211030 kubelet[2579]: E1113 08:28:42.210950 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:43.212985 kubelet[2579]: E1113 08:28:43.212946 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:28:57.024214 systemd[1]: Started sshd@9-64.23.149.40:22-139.178.89.65:48156.service - OpenSSH per-connection server daemon (139.178.89.65:48156). Nov 13 08:28:57.141863 sshd[3971]: Accepted publickey for core from 139.178.89.65 port 48156 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:28:57.143855 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:28:57.151794 systemd-logind[1457]: New session 10 of user core. Nov 13 08:28:57.160019 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 13 08:28:57.823139 sshd[3973]: Connection closed by 139.178.89.65 port 48156 Nov 13 08:28:57.824599 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Nov 13 08:28:57.837574 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Nov 13 08:28:57.837891 systemd[1]: sshd@9-64.23.149.40:22-139.178.89.65:48156.service: Deactivated successfully. Nov 13 08:28:57.839975 systemd[1]: session-10.scope: Deactivated successfully. Nov 13 08:28:57.842423 systemd-logind[1457]: Removed session 10. Nov 13 08:29:02.847263 systemd[1]: Started sshd@10-64.23.149.40:22-139.178.89.65:48168.service - OpenSSH per-connection server daemon (139.178.89.65:48168). Nov 13 08:29:02.956402 sshd[3985]: Accepted publickey for core from 139.178.89.65 port 48168 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:02.957359 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:02.975826 systemd-logind[1457]: New session 11 of user core. Nov 13 08:29:02.983083 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 13 08:29:03.167754 sshd[3987]: Connection closed by 139.178.89.65 port 48168 Nov 13 08:29:03.169161 sshd-session[3985]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:03.179155 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Nov 13 08:29:03.179782 systemd[1]: sshd@10-64.23.149.40:22-139.178.89.65:48168.service: Deactivated successfully. Nov 13 08:29:03.182953 systemd[1]: session-11.scope: Deactivated successfully. Nov 13 08:29:03.185231 systemd-logind[1457]: Removed session 11. Nov 13 08:29:08.197214 systemd[1]: Started sshd@11-64.23.149.40:22-139.178.89.65:51074.service - OpenSSH per-connection server daemon (139.178.89.65:51074). Nov 13 08:29:08.287249 sshd[3999]: Accepted publickey for core from 139.178.89.65 port 51074 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:08.290633 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:08.299190 systemd-logind[1457]: New session 12 of user core. 
Nov 13 08:29:08.305134 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 13 08:29:08.469115 sshd[4001]: Connection closed by 139.178.89.65 port 51074 Nov 13 08:29:08.468971 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:08.475028 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Nov 13 08:29:08.475563 systemd[1]: sshd@11-64.23.149.40:22-139.178.89.65:51074.service: Deactivated successfully. Nov 13 08:29:08.480279 systemd[1]: session-12.scope: Deactivated successfully. Nov 13 08:29:08.482325 systemd-logind[1457]: Removed session 12. Nov 13 08:29:13.488209 systemd[1]: Started sshd@12-64.23.149.40:22-139.178.89.65:51084.service - OpenSSH per-connection server daemon (139.178.89.65:51084). Nov 13 08:29:13.550758 sshd[4015]: Accepted publickey for core from 139.178.89.65 port 51084 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:13.551873 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:13.558477 systemd-logind[1457]: New session 13 of user core. Nov 13 08:29:13.562957 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 13 08:29:13.711194 sshd[4017]: Connection closed by 139.178.89.65 port 51084 Nov 13 08:29:13.711921 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:13.716336 systemd[1]: sshd@12-64.23.149.40:22-139.178.89.65:51084.service: Deactivated successfully. Nov 13 08:29:13.719519 systemd[1]: session-13.scope: Deactivated successfully. Nov 13 08:29:13.723212 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Nov 13 08:29:13.726684 systemd-logind[1457]: Removed session 13. Nov 13 08:29:18.732179 systemd[1]: Started sshd@13-64.23.149.40:22-139.178.89.65:34048.service - OpenSSH per-connection server daemon (139.178.89.65:34048). Nov 13 08:29:18.798018 sshd[4031]: Accepted publickey for core from 139.178.89.65 port 34048 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:18.802440 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:18.809165 systemd-logind[1457]: New session 14 of user core. Nov 13 08:29:18.822495 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 13 08:29:18.988864 sshd[4033]: Connection closed by 139.178.89.65 port 34048 Nov 13 08:29:18.990314 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:19.008510 systemd[1]: sshd@13-64.23.149.40:22-139.178.89.65:34048.service: Deactivated successfully. Nov 13 08:29:19.014523 systemd[1]: session-14.scope: Deactivated successfully. Nov 13 08:29:19.018396 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Nov 13 08:29:19.031235 systemd[1]: Started sshd@14-64.23.149.40:22-139.178.89.65:34058.service - OpenSSH per-connection server daemon (139.178.89.65:34058). Nov 13 08:29:19.035661 systemd-logind[1457]: Removed session 14. Nov 13 08:29:19.105385 sshd[4045]: Accepted publickey for core from 139.178.89.65 port 34058 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:19.108213 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:19.131435 systemd-logind[1457]: New session 15 of user core. Nov 13 08:29:19.133991 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 13 08:29:19.415735 sshd[4047]: Connection closed by 139.178.89.65 port 34058 Nov 13 08:29:19.415479 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:19.427330 systemd[1]: sshd@14-64.23.149.40:22-139.178.89.65:34058.service: Deactivated successfully. Nov 13 08:29:19.432585 systemd[1]: session-15.scope: Deactivated successfully. Nov 13 08:29:19.437103 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Nov 13 08:29:19.448022 systemd[1]: Started sshd@15-64.23.149.40:22-139.178.89.65:34064.service - OpenSSH per-connection server daemon (139.178.89.65:34064). Nov 13 08:29:19.454806 systemd-logind[1457]: Removed session 15. Nov 13 08:29:19.553338 sshd[4056]: Accepted publickey for core from 139.178.89.65 port 34064 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:19.555615 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:19.562498 systemd-logind[1457]: New session 16 of user core. Nov 13 08:29:19.573188 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 13 08:29:19.751417 sshd[4058]: Connection closed by 139.178.89.65 port 34064 Nov 13 08:29:19.752297 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:19.759062 systemd[1]: sshd@15-64.23.149.40:22-139.178.89.65:34064.service: Deactivated successfully. Nov 13 08:29:19.762262 systemd[1]: session-16.scope: Deactivated successfully. Nov 13 08:29:19.764007 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Nov 13 08:29:19.766262 systemd-logind[1457]: Removed session 16. Nov 13 08:29:19.978415 kubelet[2579]: E1113 08:29:19.978363 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:24.766873 systemd[1]: Started sshd@16-64.23.149.40:22-139.178.89.65:34066.service - OpenSSH per-connection server daemon (139.178.89.65:34066). Nov 13 08:29:24.837139 sshd[4070]: Accepted publickey for core from 139.178.89.65 port 34066 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:24.838141 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:24.844015 systemd-logind[1457]: New session 17 of user core. Nov 13 08:29:24.856031 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 13 08:29:25.000325 sshd[4072]: Connection closed by 139.178.89.65 port 34066 Nov 13 08:29:25.001468 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:25.006477 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Nov 13 08:29:25.007000 systemd[1]: sshd@16-64.23.149.40:22-139.178.89.65:34066.service: Deactivated successfully. Nov 13 08:29:25.009757 systemd[1]: session-17.scope: Deactivated successfully. Nov 13 08:29:25.012074 systemd-logind[1457]: Removed session 17. Nov 13 08:29:30.022110 systemd[1]: Started sshd@17-64.23.149.40:22-139.178.89.65:56912.service - OpenSSH per-connection server daemon (139.178.89.65:56912). Nov 13 08:29:30.073974 sshd[4083]: Accepted publickey for core from 139.178.89.65 port 56912 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:30.075915 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:30.081915 systemd-logind[1457]: New session 18 of user core. 
Nov 13 08:29:30.089050 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 13 08:29:30.239328 sshd[4085]: Connection closed by 139.178.89.65 port 56912 Nov 13 08:29:30.240113 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:30.250207 systemd[1]: sshd@17-64.23.149.40:22-139.178.89.65:56912.service: Deactivated successfully. Nov 13 08:29:30.253080 systemd[1]: session-18.scope: Deactivated successfully. Nov 13 08:29:30.255483 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Nov 13 08:29:30.264256 systemd[1]: Started sshd@18-64.23.149.40:22-139.178.89.65:56922.service - OpenSSH per-connection server daemon (139.178.89.65:56922). Nov 13 08:29:30.267222 systemd-logind[1457]: Removed session 18. Nov 13 08:29:30.324820 sshd[4096]: Accepted publickey for core from 139.178.89.65 port 56922 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:30.325772 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:30.332421 systemd-logind[1457]: New session 19 of user core. Nov 13 08:29:30.339124 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 13 08:29:30.636470 sshd[4098]: Connection closed by 139.178.89.65 port 56922 Nov 13 08:29:30.637917 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:30.647766 systemd[1]: sshd@18-64.23.149.40:22-139.178.89.65:56922.service: Deactivated successfully. Nov 13 08:29:30.650480 systemd[1]: session-19.scope: Deactivated successfully. Nov 13 08:29:30.653477 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Nov 13 08:29:30.658187 systemd[1]: Started sshd@19-64.23.149.40:22-139.178.89.65:56932.service - OpenSSH per-connection server daemon (139.178.89.65:56932). Nov 13 08:29:30.661142 systemd-logind[1457]: Removed session 19. Nov 13 08:29:30.725056 sshd[4106]: Accepted publickey for core from 139.178.89.65 port 56932 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:30.726234 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:30.732919 systemd-logind[1457]: New session 20 of user core. Nov 13 08:29:30.742057 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 13 08:29:32.625829 sshd[4108]: Connection closed by 139.178.89.65 port 56932 Nov 13 08:29:32.625432 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:32.638305 systemd[1]: sshd@19-64.23.149.40:22-139.178.89.65:56932.service: Deactivated successfully. Nov 13 08:29:32.642902 systemd[1]: session-20.scope: Deactivated successfully. Nov 13 08:29:32.646228 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Nov 13 08:29:32.655139 systemd[1]: Started sshd@20-64.23.149.40:22-139.178.89.65:56936.service - OpenSSH per-connection server daemon (139.178.89.65:56936). Nov 13 08:29:32.659696 systemd-logind[1457]: Removed session 20. Nov 13 08:29:32.741765 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 56936 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:32.743814 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:32.750830 systemd-logind[1457]: New session 21 of user core. Nov 13 08:29:32.757123 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 13 08:29:32.980878 kubelet[2579]: E1113 08:29:32.978414 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:33.094405 sshd[4127]: Connection closed by 139.178.89.65 port 56936 Nov 13 08:29:33.095118 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:33.107352 systemd[1]: sshd@20-64.23.149.40:22-139.178.89.65:56936.service: Deactivated successfully. Nov 13 08:29:33.111017 systemd[1]: session-21.scope: Deactivated successfully. Nov 13 08:29:33.113327 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Nov 13 08:29:33.118177 systemd[1]: Started sshd@21-64.23.149.40:22-139.178.89.65:56946.service - OpenSSH per-connection server daemon (139.178.89.65:56946). Nov 13 08:29:33.120438 systemd-logind[1457]: Removed session 21. Nov 13 08:29:33.181762 sshd[4136]: Accepted publickey for core from 139.178.89.65 port 56946 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:33.183224 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:33.189009 systemd-logind[1457]: New session 22 of user core. Nov 13 08:29:33.202993 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 13 08:29:33.354900 sshd[4138]: Connection closed by 139.178.89.65 port 56946 Nov 13 08:29:33.355912 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:33.361245 systemd[1]: sshd@21-64.23.149.40:22-139.178.89.65:56946.service: Deactivated successfully. Nov 13 08:29:33.364926 systemd[1]: session-22.scope: Deactivated successfully. Nov 13 08:29:33.366912 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Nov 13 08:29:33.368463 systemd-logind[1457]: Removed session 22. Nov 13 08:29:35.978566 kubelet[2579]: E1113 08:29:35.978503 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:38.378291 systemd[1]: Started sshd@22-64.23.149.40:22-139.178.89.65:59330.service - OpenSSH per-connection server daemon (139.178.89.65:59330). Nov 13 08:29:38.433484 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 59330 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:38.435991 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:38.442830 systemd-logind[1457]: New session 23 of user core. Nov 13 08:29:38.446977 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 13 08:29:38.609445 sshd[4151]: Connection closed by 139.178.89.65 port 59330 Nov 13 08:29:38.609218 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:38.613912 systemd[1]: sshd@22-64.23.149.40:22-139.178.89.65:59330.service: Deactivated successfully. Nov 13 08:29:38.616802 systemd[1]: session-23.scope: Deactivated successfully. Nov 13 08:29:38.619536 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Nov 13 08:29:38.620786 systemd-logind[1457]: Removed session 23. 
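Note on the recurring kubelet dns.go:153 "Nameserver limits exceeded" warnings above: kubelet caps the number of nameservers it will write into a pod's resolv.conf (three, per the Kubernetes DNS limits), and truncates anything beyond that while logging the applied line. A minimal Python sketch of that truncation behavior; the helper name and the cap constant are illustrative assumptions, not kubelet source:

    # Illustrative sketch (not kubelet code): reproduce the truncation that
    # produces the "applied nameserver line" seen in the dns.go:153 warnings.
    MAX_NAMESERVERS = 3  # assumption: kubelet's documented limit of 3

    def effective_nameservers(resolv_conf: str) -> list[str]:
        servers = [
            line.split()[1]
            for line in resolv_conf.splitlines()
            if line.strip().startswith("nameserver") and len(line.split()) > 1
        ]
        if len(servers) > MAX_NAMESERVERS:
            # kubelet warns "Nameserver limits were exceeded, some
            # nameservers have been omitted" and keeps only the first three
            servers = servers[:MAX_NAMESERVERS]
        return servers

    print(effective_nameservers(
        "nameserver 67.207.67.2\nnameserver 67.207.67.3\n"
        "nameserver 67.207.67.2\nnameserver 1.1.1.1\n"
    ))
    # -> ['67.207.67.2', '67.207.67.3', '67.207.67.2']

The duplicated 67.207.67.2 in the applied line suggests the node's source resolv.conf itself repeats that entry; kubelet truncates, it does not deduplicate.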
Nov 13 08:29:42.979228 kubelet[2579]: E1113 08:29:42.978306 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:43.637402 systemd[1]: Started sshd@23-64.23.149.40:22-139.178.89.65:59340.service - OpenSSH per-connection server daemon (139.178.89.65:59340). Nov 13 08:29:43.697616 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 59340 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:43.699523 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:43.707083 systemd-logind[1457]: New session 24 of user core. Nov 13 08:29:43.712031 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 13 08:29:43.869636 sshd[4166]: Connection closed by 139.178.89.65 port 59340 Nov 13 08:29:43.870586 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:43.875870 systemd[1]: sshd@23-64.23.149.40:22-139.178.89.65:59340.service: Deactivated successfully. Nov 13 08:29:43.878256 systemd[1]: session-24.scope: Deactivated successfully. Nov 13 08:29:43.879168 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Nov 13 08:29:43.881049 systemd-logind[1457]: Removed session 24. Nov 13 08:29:48.888242 systemd[1]: Started sshd@24-64.23.149.40:22-139.178.89.65:50256.service - OpenSSH per-connection server daemon (139.178.89.65:50256). Nov 13 08:29:48.940607 sshd[4179]: Accepted publickey for core from 139.178.89.65 port 50256 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:48.943465 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:48.950227 systemd-logind[1457]: New session 25 of user core. Nov 13 08:29:48.960049 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 13 08:29:48.978792 kubelet[2579]: E1113 08:29:48.978484 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:49.140826 sshd[4181]: Connection closed by 139.178.89.65 port 50256 Nov 13 08:29:49.142170 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:49.149346 systemd[1]: sshd@24-64.23.149.40:22-139.178.89.65:50256.service: Deactivated successfully. Nov 13 08:29:49.149403 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. Nov 13 08:29:49.152018 systemd[1]: session-25.scope: Deactivated successfully. Nov 13 08:29:49.153430 systemd-logind[1457]: Removed session 25. Nov 13 08:29:52.980524 kubelet[2579]: E1113 08:29:52.978757 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:29:54.159610 systemd[1]: Started sshd@25-64.23.149.40:22-139.178.89.65:50258.service - OpenSSH per-connection server daemon (139.178.89.65:50258). Nov 13 08:29:54.259205 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 50258 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:54.261342 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:54.267562 systemd-logind[1457]: New session 26 of user core. 
Nov 13 08:29:54.273054 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 13 08:29:54.428362 sshd[4195]: Connection closed by 139.178.89.65 port 50258 Nov 13 08:29:54.430836 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:54.441358 systemd[1]: sshd@25-64.23.149.40:22-139.178.89.65:50258.service: Deactivated successfully. Nov 13 08:29:54.444696 systemd[1]: session-26.scope: Deactivated successfully. Nov 13 08:29:54.446201 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit. Nov 13 08:29:54.459254 systemd[1]: Started sshd@26-64.23.149.40:22-139.178.89.65:50272.service - OpenSSH per-connection server daemon (139.178.89.65:50272). Nov 13 08:29:54.462225 systemd-logind[1457]: Removed session 26. Nov 13 08:29:54.510822 sshd[4206]: Accepted publickey for core from 139.178.89.65 port 50272 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:54.511679 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:54.519021 systemd-logind[1457]: New session 27 of user core. Nov 13 08:29:54.530073 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 13 08:29:55.973215 containerd[1482]: time="2024-11-13T08:29:55.973155702Z" level=info msg="StopContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" with timeout 30 (s)" Nov 13 08:29:55.977214 containerd[1482]: time="2024-11-13T08:29:55.977041977Z" level=info msg="Stop container \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" with signal terminated" Nov 13 08:29:56.009578 systemd[1]: run-containerd-runc-k8s.io-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef-runc.u8lIX9.mount: Deactivated successfully. Nov 13 08:29:56.011667 systemd[1]: cri-containerd-b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798.scope: Deactivated successfully. Nov 13 08:29:56.040034 containerd[1482]: time="2024-11-13T08:29:56.039595608Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 13 08:29:56.048439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798-rootfs.mount: Deactivated successfully. 
Nov 13 08:29:56.057169 containerd[1482]: time="2024-11-13T08:29:56.056796915Z" level=info msg="StopContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" with timeout 2 (s)" Nov 13 08:29:56.057895 containerd[1482]: time="2024-11-13T08:29:56.057659972Z" level=info msg="Stop container \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" with signal terminated" Nov 13 08:29:56.064876 containerd[1482]: time="2024-11-13T08:29:56.064555009Z" level=info msg="shim disconnected" id=b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798 namespace=k8s.io Nov 13 08:29:56.064876 containerd[1482]: time="2024-11-13T08:29:56.064624797Z" level=warning msg="cleaning up after shim disconnected" id=b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798 namespace=k8s.io Nov 13 08:29:56.064876 containerd[1482]: time="2024-11-13T08:29:56.064634285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:29:56.069448 systemd-networkd[1383]: lxc_health: Link DOWN Nov 13 08:29:56.069459 systemd-networkd[1383]: lxc_health: Lost carrier Nov 13 08:29:56.109463 systemd[1]: cri-containerd-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef.scope: Deactivated successfully. Nov 13 08:29:56.110120 systemd[1]: cri-containerd-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef.scope: Consumed 9.146s CPU time. Nov 13 08:29:56.114482 containerd[1482]: time="2024-11-13T08:29:56.114385404Z" level=info msg="StopContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" returns successfully" Nov 13 08:29:56.117036 containerd[1482]: time="2024-11-13T08:29:56.116893131Z" level=info msg="StopPodSandbox for \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\"" Nov 13 08:29:56.124751 containerd[1482]: time="2024-11-13T08:29:56.119256401Z" level=info msg="Container to stop \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.127359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f-shm.mount: Deactivated successfully. Nov 13 08:29:56.148308 systemd[1]: cri-containerd-226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f.scope: Deactivated successfully. Nov 13 08:29:56.160099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef-rootfs.mount: Deactivated successfully. 
Nov 13 08:29:56.171399 containerd[1482]: time="2024-11-13T08:29:56.171298017Z" level=info msg="shim disconnected" id=0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef namespace=k8s.io Nov 13 08:29:56.171811 containerd[1482]: time="2024-11-13T08:29:56.171781158Z" level=warning msg="cleaning up after shim disconnected" id=0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef namespace=k8s.io Nov 13 08:29:56.171907 containerd[1482]: time="2024-11-13T08:29:56.171892282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:29:56.185883 containerd[1482]: time="2024-11-13T08:29:56.185799530Z" level=info msg="shim disconnected" id=226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f namespace=k8s.io Nov 13 08:29:56.185883 containerd[1482]: time="2024-11-13T08:29:56.185864653Z" level=warning msg="cleaning up after shim disconnected" id=226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f namespace=k8s.io Nov 13 08:29:56.185883 containerd[1482]: time="2024-11-13T08:29:56.185873463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:29:56.202106 containerd[1482]: time="2024-11-13T08:29:56.201956157Z" level=info msg="StopContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" returns successfully" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203310674Z" level=info msg="StopPodSandbox for \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\"" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203414638Z" level=info msg="Container to stop \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203464403Z" level=info msg="Container to stop \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203483537Z" level=info msg="Container to stop \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203503537Z" level=info msg="Container to stop \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.203604 containerd[1482]: time="2024-11-13T08:29:56.203514455Z" level=info msg="Container to stop \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 13 08:29:56.220859 containerd[1482]: time="2024-11-13T08:29:56.220797434Z" level=info msg="TearDown network for sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" successfully" Nov 13 08:29:56.221165 containerd[1482]: time="2024-11-13T08:29:56.221017887Z" level=info msg="StopPodSandbox for \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" returns successfully" Nov 13 08:29:56.235584 systemd[1]: cri-containerd-3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd.scope: Deactivated successfully. 
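Note on the "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages in the StopPodSandbox entries above: when tearing down a sandbox, the CRI implementation only signals containers that might still be alive and reports already-exited ones with that message. A rough Python sketch of the state gate, as a reading aid only; the names are assumptions, not containerd's actual code:

    # Sketch of the state check implied by the StopPodSandbox log lines:
    # exited containers are skipped rather than signalled again.
    RUNNING, EXITED, UNKNOWN = (
        "CONTAINER_RUNNING", "CONTAINER_EXITED", "CONTAINER_UNKNOWN"
    )

    def should_signal(state: str) -> bool:
        # only running or unknown containers still need a stop signal
        return state in (RUNNING, UNKNOWN)

    for cid, state in {"example-container-id": EXITED}.items():
        if not should_signal(state):
            print(f'Container to stop "{cid}" must be in running or unknown '
                  f'state, current state "{state}"')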
Nov 13 08:29:56.273785 containerd[1482]: time="2024-11-13T08:29:56.273464034Z" level=info msg="shim disconnected" id=3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd namespace=k8s.io Nov 13 08:29:56.273785 containerd[1482]: time="2024-11-13T08:29:56.273533451Z" level=warning msg="cleaning up after shim disconnected" id=3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd namespace=k8s.io Nov 13 08:29:56.273785 containerd[1482]: time="2024-11-13T08:29:56.273546688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:29:56.294786 kubelet[2579]: I1113 08:29:56.293943 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgl2n\" (UniqueName: \"kubernetes.io/projected/ddbeb871-0476-4f5b-b1f2-18715a726bb9-kube-api-access-wgl2n\") pod \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\" (UID: \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\") " Nov 13 08:29:56.294786 kubelet[2579]: I1113 08:29:56.294016 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbeb871-0476-4f5b-b1f2-18715a726bb9-cilium-config-path\") pod \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\" (UID: \"ddbeb871-0476-4f5b-b1f2-18715a726bb9\") " Nov 13 08:29:56.298038 kubelet[2579]: I1113 08:29:56.297650 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbeb871-0476-4f5b-b1f2-18715a726bb9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddbeb871-0476-4f5b-b1f2-18715a726bb9" (UID: "ddbeb871-0476-4f5b-b1f2-18715a726bb9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 08:29:56.298695 containerd[1482]: time="2024-11-13T08:29:56.298634822Z" level=info msg="TearDown network for sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" successfully" Nov 13 08:29:56.298695 containerd[1482]: time="2024-11-13T08:29:56.298673065Z" level=info msg="StopPodSandbox for \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" returns successfully" Nov 13 08:29:56.302538 kubelet[2579]: I1113 08:29:56.302483 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbeb871-0476-4f5b-b1f2-18715a726bb9-kube-api-access-wgl2n" (OuterVolumeSpecName: "kube-api-access-wgl2n") pod "ddbeb871-0476-4f5b-b1f2-18715a726bb9" (UID: "ddbeb871-0476-4f5b-b1f2-18715a726bb9"). InnerVolumeSpecName "kube-api-access-wgl2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:29:56.394829 kubelet[2579]: I1113 08:29:56.394765 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-hubble-tls\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395209 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/679deea7-1e31-4199-9f95-aecaa1339cc0-clustermesh-secrets\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395415 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-hostproc\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395455 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-run\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395506 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-kernel\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395542 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-net\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.395663 kubelet[2579]: I1113 08:29:56.395592 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-lib-modules\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396061 kubelet[2579]: I1113 08:29:56.395628 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b68jz\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-kube-api-access-b68jz\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396186 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-config-path\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396246 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-cgroup\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: 
\"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396278 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-etc-cni-netd\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396327 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-xtables-lock\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396358 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cni-path\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.396460 kubelet[2579]: I1113 08:29:56.396411 2579 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-bpf-maps\") pod \"679deea7-1e31-4199-9f95-aecaa1339cc0\" (UID: \"679deea7-1e31-4199-9f95-aecaa1339cc0\") " Nov 13 08:29:56.397160 kubelet[2579]: I1113 08:29:56.396890 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wgl2n\" (UniqueName: \"kubernetes.io/projected/ddbeb871-0476-4f5b-b1f2-18715a726bb9-kube-api-access-wgl2n\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.397160 kubelet[2579]: I1113 08:29:56.396920 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddbeb871-0476-4f5b-b1f2-18715a726bb9-cilium-config-path\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.397160 kubelet[2579]: I1113 08:29:56.396991 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.400784 kubelet[2579]: I1113 08:29:56.399949 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.400784 kubelet[2579]: I1113 08:29:56.400007 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.400784 kubelet[2579]: I1113 08:29:56.400026 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.400784 kubelet[2579]: I1113 08:29:56.400045 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.400784 kubelet[2579]: I1113 08:29:56.400062 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.401157 kubelet[2579]: I1113 08:29:56.400174 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:29:56.401157 kubelet[2579]: I1113 08:29:56.400205 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.401157 kubelet[2579]: I1113 08:29:56.400227 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.401157 kubelet[2579]: I1113 08:29:56.400242 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.401157 kubelet[2579]: I1113 08:29:56.400264 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 13 08:29:56.401912 kubelet[2579]: I1113 08:29:56.401865 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-kube-api-access-b68jz" (OuterVolumeSpecName: "kube-api-access-b68jz") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "kube-api-access-b68jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 13 08:29:56.406962 kubelet[2579]: I1113 08:29:56.406898 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679deea7-1e31-4199-9f95-aecaa1339cc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 13 08:29:56.407853 kubelet[2579]: I1113 08:29:56.407814 2579 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "679deea7-1e31-4199-9f95-aecaa1339cc0" (UID: "679deea7-1e31-4199-9f95-aecaa1339cc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 13 08:29:56.432641 kubelet[2579]: I1113 08:29:56.432374 2579 scope.go:117] "RemoveContainer" containerID="0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef" Nov 13 08:29:56.444605 systemd[1]: Removed slice kubepods-burstable-pod679deea7_1e31_4199_9f95_aecaa1339cc0.slice - libcontainer container kubepods-burstable-pod679deea7_1e31_4199_9f95_aecaa1339cc0.slice. Nov 13 08:29:56.444715 systemd[1]: kubepods-burstable-pod679deea7_1e31_4199_9f95_aecaa1339cc0.slice: Consumed 9.268s CPU time. Nov 13 08:29:56.449741 containerd[1482]: time="2024-11-13T08:29:56.448493976Z" level=info msg="RemoveContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\"" Nov 13 08:29:56.453962 systemd[1]: Removed slice kubepods-besteffort-podddbeb871_0476_4f5b_b1f2_18715a726bb9.slice - libcontainer container kubepods-besteffort-podddbeb871_0476_4f5b_b1f2_18715a726bb9.slice. 
Nov 13 08:29:56.455810 containerd[1482]: time="2024-11-13T08:29:56.455370487Z" level=info msg="RemoveContainer for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" returns successfully" Nov 13 08:29:56.456816 kubelet[2579]: I1113 08:29:56.456764 2579 scope.go:117] "RemoveContainer" containerID="d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669" Nov 13 08:29:56.459964 containerd[1482]: time="2024-11-13T08:29:56.459846600Z" level=info msg="RemoveContainer for \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\"" Nov 13 08:29:56.466767 containerd[1482]: time="2024-11-13T08:29:56.465379634Z" level=info msg="RemoveContainer for \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\" returns successfully" Nov 13 08:29:56.467213 kubelet[2579]: I1113 08:29:56.467182 2579 scope.go:117] "RemoveContainer" containerID="b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da" Nov 13 08:29:56.468885 containerd[1482]: time="2024-11-13T08:29:56.468843926Z" level=info msg="RemoveContainer for \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\"" Nov 13 08:29:56.478273 containerd[1482]: time="2024-11-13T08:29:56.477674099Z" level=info msg="RemoveContainer for \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\" returns successfully" Nov 13 08:29:56.478569 kubelet[2579]: I1113 08:29:56.478537 2579 scope.go:117] "RemoveContainer" containerID="8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97" Nov 13 08:29:56.483532 containerd[1482]: time="2024-11-13T08:29:56.482542556Z" level=info msg="RemoveContainer for \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497385 2579 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-hubble-tls\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497430 2579 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/679deea7-1e31-4199-9f95-aecaa1339cc0-clustermesh-secrets\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497443 2579 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-hostproc\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497453 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-run\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497470 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-kernel\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497483 2579 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-host-proc-sys-net\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497497 2579 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-lib-modules\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.497743 kubelet[2579]: I1113 08:29:56.497511 2579 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b68jz\" (UniqueName: \"kubernetes.io/projected/679deea7-1e31-4199-9f95-aecaa1339cc0-kube-api-access-b68jz\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497525 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-config-path\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497534 2579 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cilium-cgroup\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497543 2579 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-etc-cni-netd\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497550 2579 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-xtables-lock\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497558 2579 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-bpf-maps\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.502185 kubelet[2579]: I1113 08:29:56.497566 2579 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/679deea7-1e31-4199-9f95-aecaa1339cc0-cni-path\") on node \"ci-4152.0.0-f-d2466dff01\" DevicePath \"\"" Nov 13 08:29:56.504794 containerd[1482]: time="2024-11-13T08:29:56.504737300Z" level=info msg="RemoveContainer for \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\" returns successfully" Nov 13 08:29:56.506485 kubelet[2579]: I1113 08:29:56.506448 2579 scope.go:117] "RemoveContainer" containerID="46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e" Nov 13 08:29:56.508019 containerd[1482]: time="2024-11-13T08:29:56.507970951Z" level=info msg="RemoveContainer for \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\"" Nov 13 08:29:56.510786 containerd[1482]: time="2024-11-13T08:29:56.510747105Z" level=info msg="RemoveContainer for \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\" returns successfully" Nov 13 08:29:56.511372 kubelet[2579]: I1113 08:29:56.510973 2579 scope.go:117] "RemoveContainer" containerID="0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef" Nov 13 08:29:56.511456 containerd[1482]: time="2024-11-13T08:29:56.511177992Z" level=error msg="ContainerStatus for \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\": not found" Nov 13 08:29:56.511598 kubelet[2579]: E1113 08:29:56.511567 2579 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\": not found" containerID="0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef" Nov 13 08:29:56.512602 kubelet[2579]: I1113 08:29:56.511622 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef"} err="failed to get container status \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"0054f3b122e5aee4c9722151753d918afbccc545250e408088753506b7be23ef\": not found" Nov 13 08:29:56.512753 kubelet[2579]: I1113 08:29:56.512610 2579 scope.go:117] "RemoveContainer" containerID="d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669" Nov 13 08:29:56.513017 containerd[1482]: time="2024-11-13T08:29:56.512938458Z" level=error msg="ContainerStatus for \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\": not found" Nov 13 08:29:56.514210 kubelet[2579]: E1113 08:29:56.514183 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\": not found" containerID="d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669" Nov 13 08:29:56.515214 kubelet[2579]: I1113 08:29:56.514418 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669"} err="failed to get container status \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\": rpc error: code = NotFound desc = an error occurred when try to find container \"d968fbe480c613fbb86f150b19288e0b571cb290362313a51b72bc8fb3340669\": not found" Nov 13 08:29:56.515214 kubelet[2579]: I1113 08:29:56.514477 2579 scope.go:117] "RemoveContainer" containerID="b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da" Nov 13 08:29:56.515214 kubelet[2579]: E1113 08:29:56.515145 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\": not found" containerID="b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da" Nov 13 08:29:56.515214 kubelet[2579]: I1113 08:29:56.515168 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da"} err="failed to get container status \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\": rpc error: code = NotFound desc = an error occurred when try to find container \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\": not found" Nov 13 08:29:56.515635 containerd[1482]: time="2024-11-13T08:29:56.514953091Z" level=error msg="ContainerStatus for \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b014539c01c66d37c1bc6d812620050e524a3637d753bc5b5ab1fa9bd724a8da\": not 
found" Nov 13 08:29:56.515672 kubelet[2579]: I1113 08:29:56.515187 2579 scope.go:117] "RemoveContainer" containerID="8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97" Nov 13 08:29:56.515983 containerd[1482]: time="2024-11-13T08:29:56.515921919Z" level=error msg="ContainerStatus for \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\": not found" Nov 13 08:29:56.517145 kubelet[2579]: E1113 08:29:56.516910 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\": not found" containerID="8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97" Nov 13 08:29:56.517145 kubelet[2579]: I1113 08:29:56.516947 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97"} err="failed to get container status \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bbb132097b80658ef4379b9dd40503cc387038bf015f655f02f7fdce56e9f97\": not found" Nov 13 08:29:56.517145 kubelet[2579]: I1113 08:29:56.517002 2579 scope.go:117] "RemoveContainer" containerID="46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e" Nov 13 08:29:56.517527 containerd[1482]: time="2024-11-13T08:29:56.517419896Z" level=error msg="ContainerStatus for \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\": not found" Nov 13 08:29:56.517846 kubelet[2579]: E1113 08:29:56.517697 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\": not found" containerID="46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e" Nov 13 08:29:56.517846 kubelet[2579]: I1113 08:29:56.517733 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e"} err="failed to get container status \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\": rpc error: code = NotFound desc = an error occurred when try to find container \"46e26d57661ddd82c42fb6f2ec5ac5c2161cfe5d1263bd7016cbd8e2b1ba464e\": not found" Nov 13 08:29:56.517846 kubelet[2579]: I1113 08:29:56.517772 2579 scope.go:117] "RemoveContainer" containerID="b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798" Nov 13 08:29:56.519008 containerd[1482]: time="2024-11-13T08:29:56.518940162Z" level=info msg="RemoveContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\"" Nov 13 08:29:56.522434 containerd[1482]: time="2024-11-13T08:29:56.522359164Z" level=info msg="RemoveContainer for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" returns successfully" Nov 13 08:29:56.522890 kubelet[2579]: I1113 08:29:56.522666 2579 scope.go:117] "RemoveContainer" containerID="b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798" 
Nov 13 08:29:56.522965 containerd[1482]: time="2024-11-13T08:29:56.522943263Z" level=error msg="ContainerStatus for \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\": not found" Nov 13 08:29:56.523168 kubelet[2579]: E1113 08:29:56.523136 2579 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\": not found" containerID="b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798" Nov 13 08:29:56.523311 kubelet[2579]: I1113 08:29:56.523280 2579 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798"} err="failed to get container status \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3fee1d219e1106f1f122da4aed788c8e65c010a77a65ce0fa4191a2b5870798\": not found" Nov 13 08:29:56.981744 kubelet[2579]: I1113 08:29:56.981207 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" path="/var/lib/kubelet/pods/679deea7-1e31-4199-9f95-aecaa1339cc0/volumes" Nov 13 08:29:56.982517 kubelet[2579]: I1113 08:29:56.982486 2579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbeb871-0476-4f5b-b1f2-18715a726bb9" path="/var/lib/kubelet/pods/ddbeb871-0476-4f5b-b1f2-18715a726bb9/volumes" Nov 13 08:29:56.999877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f-rootfs.mount: Deactivated successfully. Nov 13 08:29:57.000055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd-rootfs.mount: Deactivated successfully. Nov 13 08:29:57.000158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd-shm.mount: Deactivated successfully. Nov 13 08:29:57.000293 systemd[1]: var-lib-kubelet-pods-ddbeb871\x2d0476\x2d4f5b\x2db1f2\x2d18715a726bb9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwgl2n.mount: Deactivated successfully. Nov 13 08:29:57.000403 systemd[1]: var-lib-kubelet-pods-679deea7\x2d1e31\x2d4199\x2d9f95\x2daecaa1339cc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db68jz.mount: Deactivated successfully. Nov 13 08:29:57.000503 systemd[1]: var-lib-kubelet-pods-679deea7\x2d1e31\x2d4199\x2d9f95\x2daecaa1339cc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 13 08:29:57.000607 systemd[1]: var-lib-kubelet-pods-679deea7\x2d1e31\x2d4199\x2d9f95\x2daecaa1339cc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 13 08:29:57.890045 sshd[4208]: Connection closed by 139.178.89.65 port 50272 Nov 13 08:29:57.891459 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:57.903514 systemd[1]: sshd@26-64.23.149.40:22-139.178.89.65:50272.service: Deactivated successfully. Nov 13 08:29:57.907764 systemd[1]: session-27.scope: Deactivated successfully. Nov 13 08:29:57.909488 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit. 
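Note on the mount-unit names in the cleanup entries above (e.g. var-lib-kubelet-pods-ddbeb871\x2d0476\x2d...-volumes-...): systemd derives unit names from filesystem paths by turning "/" into "-" and escaping other special bytes, including literal hyphens, as \xXX (cf. systemd-escape --path). A rough Python sketch of that escaping, assuming the simplified rules; the real implementation has more edge cases:

    # Rough sketch of systemd's path-to-unit-name escaping, which explains
    # names like var-lib-kubelet-pods-ddbeb871\x2d0476\x2d....mount
    def systemd_escape_path(path: str) -> str:
        allowed = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        out = []
        for i, byte in enumerate(path.strip("/").encode()):
            c = chr(byte)
            if c == "/":
                out.append("-")          # path separator becomes "-"
            elif c in allowed and not (i == 0 and c == "."):
                out.append(c)            # plain characters pass through
            else:
                out.append("\\x%02x" % byte)  # "-" and others become \xXX
        return "".join(out)

    print(systemd_escape_path("/var/lib/kubelet/pods/ddbeb871-0476-4f5b"))
    # -> var-lib-kubelet-pods-ddbeb871\x2d0476\x2d4f5b

Escaping the literal hyphens is what keeps the original path recoverable from the unit name despite "-" doubling as the separator.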
Nov 13 08:29:57.916039 systemd[1]: Started sshd@27-64.23.149.40:22-139.178.89.65:41910.service - OpenSSH per-connection server daemon (139.178.89.65:41910). Nov 13 08:29:57.917853 systemd-logind[1457]: Removed session 27. Nov 13 08:29:57.991920 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 41910 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:57.993560 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:57.999465 systemd-logind[1457]: New session 28 of user core. Nov 13 08:29:58.008262 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 13 08:29:58.696456 sshd[4369]: Connection closed by 139.178.89.65 port 41910 Nov 13 08:29:58.697094 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:58.710881 systemd[1]: sshd@27-64.23.149.40:22-139.178.89.65:41910.service: Deactivated successfully. Nov 13 08:29:58.716487 systemd[1]: session-28.scope: Deactivated successfully. Nov 13 08:29:58.720497 systemd-logind[1457]: Session 28 logged out. Waiting for processes to exit. Nov 13 08:29:58.733868 systemd[1]: Started sshd@28-64.23.149.40:22-139.178.89.65:41916.service - OpenSSH per-connection server daemon (139.178.89.65:41916). Nov 13 08:29:58.739628 systemd-logind[1457]: Removed session 28. Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751262 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="mount-bpf-fs" Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751310 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="clean-cilium-state" Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751323 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="mount-cgroup" Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751333 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="apply-sysctl-overwrites" Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751343 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddbeb871-0476-4f5b-b1f2-18715a726bb9" containerName="cilium-operator" Nov 13 08:29:58.753764 kubelet[2579]: E1113 08:29:58.751354 2579 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="cilium-agent" Nov 13 08:29:58.753764 kubelet[2579]: I1113 08:29:58.751390 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="679deea7-1e31-4199-9f95-aecaa1339cc0" containerName="cilium-agent" Nov 13 08:29:58.753764 kubelet[2579]: I1113 08:29:58.751401 2579 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbeb871-0476-4f5b-b1f2-18715a726bb9" containerName="cilium-operator" Nov 13 08:29:58.768504 systemd[1]: Created slice kubepods-burstable-pod1f61bc40_c074_48c8_9bd0_c12e311d77c1.slice - libcontainer container kubepods-burstable-pod1f61bc40_c074_48c8_9bd0_c12e311d77c1.slice. 
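
The cpu_manager and memory_manager RemoveStaleState bursts above are routine admission-time cleanup: both managers keep per-container resource state keyed by pod UID, and before the replacement cilium pod is admitted they sweep the entries belonging to the two deleted pods. A conceptual Go sketch with simplified stand-in types (kubelet's real structures differ):

    package main

    import "fmt"

    // assignments is a simplified stand-in for the managers' state:
    // pod UID -> container name -> recorded resource assignment.
    type assignments map[string]map[string]string

    func removeStaleState(a assignments, activePods map[string]bool) {
        for podUID, containers := range a {
            if activePods[podUID] {
                continue // pod still exists; keep its state
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n",
                    podUID, name)
            }
            delete(a, podUID)
        }
    }

    func main() {
        a := assignments{
            "679deea7-1e31-4199-9f95-aecaa1339cc0": {
                "cilium-agent": "cpus=0-1",
                "mount-cgroup": "cpus=0-1",
            },
            "ddbeb871-0476-4f5b-b1f2-18715a726bb9": {
                "cilium-operator": "cpus=0-1",
            },
        }
        // Both old pods are gone, so every entry is stale.
        removeStaleState(a, map[string]bool{})
    }
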
Nov 13 08:29:58.782136 kubelet[2579]: W1113 08:29:58.782089 2579 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152.0.0-f-d2466dff01" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object Nov 13 08:29:58.783663 kubelet[2579]: E1113 08:29:58.783466 2579 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4152.0.0-f-d2466dff01\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object" logger="UnhandledError" Nov 13 08:29:58.783663 kubelet[2579]: W1113 08:29:58.782420 2579 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152.0.0-f-d2466dff01" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object Nov 13 08:29:58.783663 kubelet[2579]: E1113 08:29:58.783584 2579 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4152.0.0-f-d2466dff01\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object" logger="UnhandledError" Nov 13 08:29:58.783663 kubelet[2579]: W1113 08:29:58.782490 2579 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152.0.0-f-d2466dff01" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object Nov 13 08:29:58.784008 kubelet[2579]: E1113 08:29:58.783617 2579 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4152.0.0-f-d2466dff01\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object" logger="UnhandledError" Nov 13 08:29:58.784008 kubelet[2579]: W1113 08:29:58.782525 2579 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152.0.0-f-d2466dff01" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object Nov 13 08:29:58.784008 kubelet[2579]: E1113 08:29:58.783636 2579 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4152.0.0-f-d2466dff01\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.0.0-f-d2466dff01' and this object" logger="UnhandledError" Nov 13 
08:29:58.805020 sshd[4379]: Accepted publickey for core from 139.178.89.65 port 41916 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:58.807092 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.813908 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-host-proc-sys-kernel\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.813974 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cni-path\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.814007 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-run\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.814034 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f61bc40-c074-48c8-9bd0-c12e311d77c1-hubble-tls\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.814062 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-etc-cni-netd\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.815337 kubelet[2579]: I1113 08:29:58.814091 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-host-proc-sys-net\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816768 kubelet[2579]: I1113 08:29:58.814119 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-xtables-lock\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816768 kubelet[2579]: I1113 08:29:58.814147 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-ipsec-secrets\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816768 kubelet[2579]: I1113 08:29:58.814173 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-lib-modules\") pod \"cilium-x5qvh\" (UID: 
\"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816768 kubelet[2579]: I1113 08:29:58.814198 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-clustermesh-secrets\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816768 kubelet[2579]: I1113 08:29:58.814228 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-config-path\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816999 kubelet[2579]: I1113 08:29:58.814254 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7h24\" (UniqueName: \"kubernetes.io/projected/1f61bc40-c074-48c8-9bd0-c12e311d77c1-kube-api-access-s7h24\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816999 kubelet[2579]: I1113 08:29:58.814280 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-bpf-maps\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816999 kubelet[2579]: I1113 08:29:58.814311 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-hostproc\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.816999 kubelet[2579]: I1113 08:29:58.814337 2579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-cgroup\") pod \"cilium-x5qvh\" (UID: \"1f61bc40-c074-48c8-9bd0-c12e311d77c1\") " pod="kube-system/cilium-x5qvh" Nov 13 08:29:58.818854 systemd-logind[1457]: New session 29 of user core. Nov 13 08:29:58.820995 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 13 08:29:58.885425 sshd[4381]: Connection closed by 139.178.89.65 port 41916 Nov 13 08:29:58.885270 sshd-session[4379]: pam_unix(sshd:session): session closed for user core Nov 13 08:29:58.896617 systemd[1]: sshd@28-64.23.149.40:22-139.178.89.65:41916.service: Deactivated successfully. Nov 13 08:29:58.901915 systemd[1]: session-29.scope: Deactivated successfully. Nov 13 08:29:58.905836 systemd-logind[1457]: Session 29 logged out. Waiting for processes to exit. Nov 13 08:29:58.913226 systemd[1]: Started sshd@29-64.23.149.40:22-139.178.89.65:41928.service - OpenSSH per-connection server daemon (139.178.89.65:41928). Nov 13 08:29:58.915620 systemd-logind[1457]: Removed session 29. Nov 13 08:29:58.981598 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 41928 ssh2: RSA SHA256:y1vnoETCEWXko05A54YDR0hkA2v6GlVRkyD+bKoxHKM Nov 13 08:29:58.983173 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 13 08:29:58.990756 systemd-logind[1457]: New session 30 of user core. 
Nov 13 08:29:58.998042 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 13 08:29:59.185513 kubelet[2579]: E1113 08:29:59.185434 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917149 2579 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917181 2579 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917290 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-config-path podName:1f61bc40-c074-48c8-9bd0-c12e311d77c1 nodeName:}" failed. No retries permitted until 2024-11-13 08:30:00.417258922 +0000 UTC m=+111.698784556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-config-path") pod "cilium-x5qvh" (UID: "1f61bc40-c074-48c8-9bd0-c12e311d77c1") : failed to sync configmap cache: timed out waiting for the condition Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917303 2579 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917314 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-ipsec-secrets podName:1f61bc40-c074-48c8-9bd0-c12e311d77c1 nodeName:}" failed. No retries permitted until 2024-11-13 08:30:00.417303229 +0000 UTC m=+111.698828863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-cilium-ipsec-secrets") pod "cilium-x5qvh" (UID: "1f61bc40-c074-48c8-9bd0-c12e311d77c1") : failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.917438 kubelet[2579]: E1113 08:29:59.917321 2579 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-x5qvh: failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.918367 kubelet[2579]: E1113 08:29:59.917337 2579 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.918367 kubelet[2579]: E1113 08:29:59.917371 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f61bc40-c074-48c8-9bd0-c12e311d77c1-hubble-tls podName:1f61bc40-c074-48c8-9bd0-c12e311d77c1 nodeName:}" failed. No retries permitted until 2024-11-13 08:30:00.417350225 +0000 UTC m=+111.698875890 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/1f61bc40-c074-48c8-9bd0-c12e311d77c1-hubble-tls") pod "cilium-x5qvh" (UID: "1f61bc40-c074-48c8-9bd0-c12e311d77c1") : failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.918367 kubelet[2579]: E1113 08:29:59.917394 2579 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-clustermesh-secrets podName:1f61bc40-c074-48c8-9bd0-c12e311d77c1 nodeName:}" failed. No retries permitted until 2024-11-13 08:30:00.417383862 +0000 UTC m=+111.698909496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1f61bc40-c074-48c8-9bd0-c12e311d77c1-clustermesh-secrets") pod "cilium-x5qvh" (UID: "1f61bc40-c074-48c8-9bd0-c12e311d77c1") : failed to sync secret cache: timed out waiting for the condition Nov 13 08:29:59.978089 kubelet[2579]: E1113 08:29:59.978019 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-7trhx" podUID="dcaad878-fa95-43c0-a15f-ba41a26c6aac" Nov 13 08:30:00.579270 kubelet[2579]: E1113 08:30:00.579174 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:00.582064 containerd[1482]: time="2024-11-13T08:30:00.581987189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5qvh,Uid:1f61bc40-c074-48c8-9bd0-c12e311d77c1,Namespace:kube-system,Attempt:0,}" Nov 13 08:30:00.619479 containerd[1482]: time="2024-11-13T08:30:00.619290573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 13 08:30:00.619479 containerd[1482]: time="2024-11-13T08:30:00.619403305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 13 08:30:00.619479 containerd[1482]: time="2024-11-13T08:30:00.619430687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:30:00.620815 containerd[1482]: time="2024-11-13T08:30:00.620635075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 13 08:30:00.651105 systemd[1]: Started cri-containerd-eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d.scope - libcontainer container eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d. 
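
The MountVolume.SetUp failures above are the downstream effect of those forbidden watches: the secret and configmap caches cannot sync, so each mount attempt fails and the next one is pushed out by a growing delay. A sketch of that pacing; the 500ms initial delay matches the log, while the doubling factor and the cap are assumptions for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond
        const maxDelay = 2 * time.Minute // assumed cap, not from the log
        failedAt := time.Now()

        for attempt := 1; attempt <= 4; attempt++ {
            retryAt := failedAt.Add(delay)
            fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
                attempt, retryAt.Format(time.RFC3339Nano), delay)
            // Back off exponentially before the next try.
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
            failedAt = retryAt
        }
    }
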
Nov 13 08:30:00.691671 containerd[1482]: time="2024-11-13T08:30:00.691597410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x5qvh,Uid:1f61bc40-c074-48c8-9bd0-c12e311d77c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\"" Nov 13 08:30:00.693239 kubelet[2579]: E1113 08:30:00.693178 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:00.696558 containerd[1482]: time="2024-11-13T08:30:00.696476212Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 13 08:30:00.726183 containerd[1482]: time="2024-11-13T08:30:00.726084058Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8\"" Nov 13 08:30:00.726833 containerd[1482]: time="2024-11-13T08:30:00.726797718Z" level=info msg="StartContainer for \"5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8\"" Nov 13 08:30:00.769748 systemd[1]: Started cri-containerd-5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8.scope - libcontainer container 5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8. Nov 13 08:30:00.817271 containerd[1482]: time="2024-11-13T08:30:00.817027853Z" level=info msg="StartContainer for \"5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8\" returns successfully" Nov 13 08:30:00.836519 systemd[1]: cri-containerd-5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8.scope: Deactivated successfully. 
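
The recurring dns.go "Nameserver limits exceeded" errors reflect the classic resolv.conf constraint: glibc consults at most three nameservers, so the kubelet trims the resolver list to three and logs the line it actually applied (here DigitalOcean's resolvers, including a duplicate). A small sketch of that trim; the fourth server below is hypothetical, added just to trigger the warning:

    package main

    import (
        "fmt"
        "strings"
    )

    // applyNameserverLimit keeps at most three nameservers, the limit
    // glibc enforces for resolv.conf, and reports what was dropped.
    func applyNameserverLimit(servers []string) []string {
        const maxNameservers = 3
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s): %s\n",
                len(servers)-maxNameservers,
                strings.Join(servers[maxNameservers:], " "))
            servers = servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        applied := applyNameserverLimit([]string{
            "67.207.67.2", "67.207.67.3", "67.207.67.2",
            "10.0.0.53", // hypothetical extra entry
        })
        fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
    }
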
Nov 13 08:30:00.889909 containerd[1482]: time="2024-11-13T08:30:00.889785548Z" level=info msg="shim disconnected" id=5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8 namespace=k8s.io Nov 13 08:30:00.889909 containerd[1482]: time="2024-11-13T08:30:00.889920195Z" level=warning msg="cleaning up after shim disconnected" id=5e5a77b17cb5e9f1d68ca60fe36e7a6118c6a41fb75c55f7d48372b427682ee8 namespace=k8s.io Nov 13 08:30:00.890178 containerd[1482]: time="2024-11-13T08:30:00.889935493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:30:00.910951 containerd[1482]: time="2024-11-13T08:30:00.910822719Z" level=warning msg="cleanup warnings time=\"2024-11-13T08:30:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 13 08:30:01.459382 kubelet[2579]: E1113 08:30:01.459317 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:01.467233 containerd[1482]: time="2024-11-13T08:30:01.466697000Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 13 08:30:01.510685 containerd[1482]: time="2024-11-13T08:30:01.510392318Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27\"" Nov 13 08:30:01.513062 containerd[1482]: time="2024-11-13T08:30:01.512318902Z" level=info msg="StartContainer for \"4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27\"" Nov 13 08:30:01.618374 systemd[1]: Started cri-containerd-4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27.scope - libcontainer container 4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27. Nov 13 08:30:01.673793 containerd[1482]: time="2024-11-13T08:30:01.672897151Z" level=info msg="StartContainer for \"4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27\" returns successfully" Nov 13 08:30:01.681826 systemd[1]: cri-containerd-4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27.scope: Deactivated successfully. 
Nov 13 08:30:01.760682 containerd[1482]: time="2024-11-13T08:30:01.760464004Z" level=info msg="shim disconnected" id=4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27 namespace=k8s.io Nov 13 08:30:01.760682 containerd[1482]: time="2024-11-13T08:30:01.760531656Z" level=warning msg="cleaning up after shim disconnected" id=4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27 namespace=k8s.io Nov 13 08:30:01.760682 containerd[1482]: time="2024-11-13T08:30:01.760544739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:30:01.979210 kubelet[2579]: E1113 08:30:01.979101 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-7trhx" podUID="dcaad878-fa95-43c0-a15f-ba41a26c6aac" Nov 13 08:30:02.032884 kubelet[2579]: I1113 08:30:02.031901 2579 setters.go:600] "Node became not ready" node="ci-4152.0.0-f-d2466dff01" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-13T08:30:02Z","lastTransitionTime":"2024-11-13T08:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 13 08:30:02.440668 systemd[1]: run-containerd-runc-k8s.io-4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27-runc.qgfHZj.mount: Deactivated successfully. Nov 13 08:30:02.440889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0d9216027e9517f1ac4b5c5e9811634f76471e2a9d2206cf4b365867165e27-rootfs.mount: Deactivated successfully. Nov 13 08:30:02.465309 kubelet[2579]: E1113 08:30:02.465229 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:02.473772 containerd[1482]: time="2024-11-13T08:30:02.473292214Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 13 08:30:02.511061 containerd[1482]: time="2024-11-13T08:30:02.510907794Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6\"" Nov 13 08:30:02.514529 containerd[1482]: time="2024-11-13T08:30:02.514459065Z" level=info msg="StartContainer for \"d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6\"" Nov 13 08:30:02.581984 systemd[1]: Started cri-containerd-d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6.scope - libcontainer container d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6. Nov 13 08:30:02.637494 containerd[1482]: time="2024-11-13T08:30:02.637422129Z" level=info msg="StartContainer for \"d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6\" returns successfully" Nov 13 08:30:02.647738 systemd[1]: cri-containerd-d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6.scope: Deactivated successfully. 
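
The "Node became not ready" entry above is the kubelet flipping the node's Ready condition to False while the CNI plugin is still uninitialized; it stays that way until the new cilium-agent comes up. Rebuilding that condition object with simplified local types (not the real k8s.io/api structs) reproduces the JSON shape in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // nodeCondition is a local stand-in for the v1.NodeCondition fields
    // that appear in the logged condition={"type":"Ready",...} entry.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        c := nodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  "2024-11-13T08:30:02Z",
            LastTransitionTime: "2024-11-13T08:30:02Z",
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "cni plugin not initialized",
        }
        if err := json.NewEncoder(os.Stdout).Encode(c); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
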
Nov 13 08:30:02.701989 containerd[1482]: time="2024-11-13T08:30:02.701692484Z" level=info msg="shim disconnected" id=d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6 namespace=k8s.io Nov 13 08:30:02.701989 containerd[1482]: time="2024-11-13T08:30:02.701805818Z" level=warning msg="cleaning up after shim disconnected" id=d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6 namespace=k8s.io Nov 13 08:30:02.701989 containerd[1482]: time="2024-11-13T08:30:02.701818804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:30:03.448258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d09a0de2c3222d2226df5f176167597cfa9ec11670514197349b13468d7148f6-rootfs.mount: Deactivated successfully. Nov 13 08:30:03.477892 kubelet[2579]: E1113 08:30:03.471577 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:03.482400 containerd[1482]: time="2024-11-13T08:30:03.481510320Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 13 08:30:03.570606 containerd[1482]: time="2024-11-13T08:30:03.570533095Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2\"" Nov 13 08:30:03.574074 containerd[1482]: time="2024-11-13T08:30:03.573992839Z" level=info msg="StartContainer for \"42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2\"" Nov 13 08:30:03.631268 systemd[1]: Started cri-containerd-42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2.scope - libcontainer container 42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2. Nov 13 08:30:03.682743 systemd[1]: cri-containerd-42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2.scope: Deactivated successfully. 
Nov 13 08:30:03.687940 containerd[1482]: time="2024-11-13T08:30:03.687755011Z" level=info msg="StartContainer for \"42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2\" returns successfully" Nov 13 08:30:03.763933 containerd[1482]: time="2024-11-13T08:30:03.762649742Z" level=info msg="shim disconnected" id=42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2 namespace=k8s.io Nov 13 08:30:03.763933 containerd[1482]: time="2024-11-13T08:30:03.762783968Z" level=warning msg="cleaning up after shim disconnected" id=42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2 namespace=k8s.io Nov 13 08:30:03.763933 containerd[1482]: time="2024-11-13T08:30:03.762799052Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 13 08:30:03.978613 kubelet[2579]: E1113 08:30:03.977865 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-7trhx" podUID="dcaad878-fa95-43c0-a15f-ba41a26c6aac" Nov 13 08:30:04.187493 kubelet[2579]: E1113 08:30:04.187334 2579 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 13 08:30:04.460302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ce75715933aedc4bde93b073368531b80940bf0b2e70dac5472534efb3f0a2-rootfs.mount: Deactivated successfully. Nov 13 08:30:04.497518 kubelet[2579]: E1113 08:30:04.497148 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:04.545226 containerd[1482]: time="2024-11-13T08:30:04.543875108Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 13 08:30:04.600303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278798383.mount: Deactivated successfully. Nov 13 08:30:04.609254 containerd[1482]: time="2024-11-13T08:30:04.609167124Z" level=info msg="CreateContainer within sandbox \"eb0f13b45b2dda145fdef35af848c3ec20c5742c50f10060f761d07704c5852d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4\"" Nov 13 08:30:04.611755 containerd[1482]: time="2024-11-13T08:30:04.610079911Z" level=info msg="StartContainer for \"3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4\"" Nov 13 08:30:04.696200 systemd[1]: Started cri-containerd-3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4.scope - libcontainer container 3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4. 
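
The pattern repeated over the last dozen entries is Cilium's init chain inside a single sandbox: each init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) is created, runs to completion, and has its shim torn down before the next one starts; only then does the long-running cilium-agent container, started just above, come up. A sketch of that sequencing, with runInitStep as a hypothetical stand-in for the CreateContainer/StartContainer CRI calls:

    package main

    import "fmt"

    // runInitStep stands in for creating and starting one init container
    // and waiting for it to exit; a non-nil error would abort the chain.
    func runInitStep(sandboxID, name string) error {
        fmt.Printf("CreateContainer within sandbox %q for %s\n", sandboxID, name)
        fmt.Printf("StartContainer for %s returns successfully\n", name)
        fmt.Printf("shim for %s disconnected; cleaning up\n", name)
        return nil
    }

    func main() {
        sandbox := "eb0f13b45b2d..." // truncated sandbox id from the log
        steps := []string{
            "mount-cgroup",
            "apply-sysctl-overwrites",
            "mount-bpf-fs",
            "clean-cilium-state",
        }
        for _, s := range steps {
            if err := runInitStep(sandbox, s); err != nil {
                fmt.Println("init step failed, pod stays pending:", err)
                return
            }
        }
        // Only after every init step succeeds does the long-running
        // agent container start.
        fmt.Println("StartContainer for cilium-agent")
    }
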
Nov 13 08:30:04.756249 containerd[1482]: time="2024-11-13T08:30:04.754111016Z" level=info msg="StartContainer for \"3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4\" returns successfully" Nov 13 08:30:05.512002 kubelet[2579]: E1113 08:30:05.511944 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:05.686697 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 13 08:30:05.977810 kubelet[2579]: E1113 08:30:05.977620 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-7trhx" podUID="dcaad878-fa95-43c0-a15f-ba41a26c6aac" Nov 13 08:30:06.591732 kubelet[2579]: E1113 08:30:06.591613 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:07.982586 kubelet[2579]: E1113 08:30:07.982148 2579 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-7trhx" podUID="dcaad878-fa95-43c0-a15f-ba41a26c6aac" Nov 13 08:30:08.133004 kubelet[2579]: E1113 08:30:08.132675 2579 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52988->127.0.0.1:44323: write tcp 127.0.0.1:52988->127.0.0.1:44323: write: broken pipe Nov 13 08:30:09.020592 containerd[1482]: time="2024-11-13T08:30:09.020352761Z" level=info msg="StopPodSandbox for \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\"" Nov 13 08:30:09.020592 containerd[1482]: time="2024-11-13T08:30:09.020506388Z" level=info msg="TearDown network for sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" successfully" Nov 13 08:30:09.020592 containerd[1482]: time="2024-11-13T08:30:09.020524977Z" level=info msg="StopPodSandbox for \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" returns successfully" Nov 13 08:30:09.022841 containerd[1482]: time="2024-11-13T08:30:09.021998698Z" level=info msg="RemovePodSandbox for \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\"" Nov 13 08:30:09.022841 containerd[1482]: time="2024-11-13T08:30:09.022063596Z" level=info msg="Forcibly stopping sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\"" Nov 13 08:30:09.022841 containerd[1482]: time="2024-11-13T08:30:09.022163642Z" level=info msg="TearDown network for sandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" successfully" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.046353345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.046524065Z" level=info msg="RemovePodSandbox \"3f658e37b7e673d7c6608e4c09b66a266aac529e942bc414e85007950fb127fd\" returns successfully" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.049489257Z" level=info msg="StopPodSandbox for \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\"" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.049651079Z" level=info msg="TearDown network for sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" successfully" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.049667065Z" level=info msg="StopPodSandbox for \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" returns successfully" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.050619578Z" level=info msg="RemovePodSandbox for \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\"" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.050659780Z" level=info msg="Forcibly stopping sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\"" Nov 13 08:30:09.053422 containerd[1482]: time="2024-11-13T08:30:09.050771345Z" level=info msg="TearDown network for sandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" successfully" Nov 13 08:30:09.058197 containerd[1482]: time="2024-11-13T08:30:09.058024834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 13 08:30:09.058197 containerd[1482]: time="2024-11-13T08:30:09.058133075Z" level=info msg="RemovePodSandbox \"226bdf547d85bb04067256bed163cc414c38c0678ee5e6b973c17eac2029915f\" returns successfully" Nov 13 08:30:09.986384 kubelet[2579]: E1113 08:30:09.977916 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:11.002351 systemd-networkd[1383]: lxc_health: Link UP Nov 13 08:30:11.020917 systemd-networkd[1383]: lxc_health: Gained carrier Nov 13 08:30:12.595727 kubelet[2579]: E1113 08:30:12.595644 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:12.611157 systemd-networkd[1383]: lxc_health: Gained IPv6LL Nov 13 08:30:12.643761 kubelet[2579]: I1113 08:30:12.643326 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x5qvh" podStartSLOduration=14.643192831 podStartE2EDuration="14.643192831s" podCreationTimestamp="2024-11-13 08:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-13 08:30:05.551781214 +0000 UTC m=+116.833306880" watchObservedRunningTime="2024-11-13 08:30:12.643192831 +0000 UTC m=+123.924718501" Nov 13 08:30:13.559284 kubelet[2579]: E1113 08:30:13.559058 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:13.978001 kubelet[2579]: E1113 08:30:13.977946 2579 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 13 08:30:14.975013 systemd[1]: run-containerd-runc-k8s.io-3bc66930087d0d403e38e46945459c1ce5369fb24bd2caa97105897babf156b4-runc.SVfnt5.mount: Deactivated successfully. Nov 13 08:30:17.294756 sshd[4390]: Connection closed by 139.178.89.65 port 41928 Nov 13 08:30:17.296361 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Nov 13 08:30:17.304306 systemd[1]: sshd@29-64.23.149.40:22-139.178.89.65:41928.service: Deactivated successfully. Nov 13 08:30:17.308211 systemd[1]: session-30.scope: Deactivated successfully. Nov 13 08:30:17.313840 systemd-logind[1457]: Session 30 logged out. Waiting for processes to exit. Nov 13 08:30:17.316411 systemd-logind[1457]: Removed session 30.