Nov 12 20:48:17.055938 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:48:17.055967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:17.055980 kernel: BIOS-provided physical RAM map:
Nov 12 20:48:17.055988 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:48:17.055994 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:48:17.056001 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:48:17.056010 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 12 20:48:17.056017 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 12 20:48:17.056024 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:48:17.056034 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:48:17.056045 kernel: NX (Execute Disable) protection: active
Nov 12 20:48:17.056052 kernel: APIC: Static calls initialized
Nov 12 20:48:17.056060 kernel: SMBIOS 2.8 present.
Nov 12 20:48:17.056067 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 12 20:48:17.056076 kernel: Hypervisor detected: KVM
Nov 12 20:48:17.056088 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:48:17.056096 kernel: kvm-clock: using sched offset of 3766190365 cycles
Nov 12 20:48:17.056107 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:48:17.056116 kernel: tsc: Detected 2294.608 MHz processor
Nov 12 20:48:17.056124 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:48:17.056133 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:48:17.056142 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 12 20:48:17.056150 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:48:17.056158 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:48:17.057238 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:48:17.057258 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 12 20:48:17.057276 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057295 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057313 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057331 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 12 20:48:17.057350 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057368 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057386 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057411 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:48:17.057430 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 12 20:48:17.057448 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 12 20:48:17.057466 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 12 20:48:17.057484 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 12 20:48:17.057502 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 12 20:48:17.057521 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 12 20:48:17.057573 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 12 20:48:17.057593 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:48:17.057612 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:48:17.057632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 12 20:48:17.057652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 12 20:48:17.057672 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 12 20:48:17.057692 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 12 20:48:17.057715 kernel: Zone ranges:
Nov 12 20:48:17.057735 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:48:17.057755 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 12 20:48:17.057775 kernel: Normal empty
Nov 12 20:48:17.057795 kernel: Movable zone start for each node
Nov 12 20:48:17.057814 kernel: Early memory node ranges
Nov 12 20:48:17.057834 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:48:17.057855 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 12 20:48:17.057875 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 12 20:48:17.057898 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:48:17.057920 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:48:17.057940 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 12 20:48:17.057959 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:48:17.057979 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:48:17.057998 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:48:17.058018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:48:17.058037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:48:17.058057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:48:17.058080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:48:17.058099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:48:17.058119 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:48:17.058138 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:48:17.058158 kernel: TSC deadline timer available
Nov 12 20:48:17.059256 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:48:17.059277 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:48:17.059297 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 12 20:48:17.059316 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:48:17.059339 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:48:17.059372 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:48:17.059393 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:48:17.059412 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:48:17.059432 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:48:17.059454 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 12 20:48:17.059473 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:17.059486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:48:17.059506 kernel: random: crng init done
Nov 12 20:48:17.059524 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:48:17.059538 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:48:17.059554 kernel: Fallback order for Node 0: 0
Nov 12 20:48:17.059563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 12 20:48:17.059572 kernel: Policy zone: DMA32
Nov 12 20:48:17.059582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:48:17.059591 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Nov 12 20:48:17.059600 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:48:17.059613 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:48:17.059621 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:48:17.059631 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:48:17.059640 kernel: Dynamic Preempt: voluntary
Nov 12 20:48:17.059648 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:48:17.059658 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:48:17.059667 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:48:17.059676 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:48:17.060935 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:48:17.060950 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:48:17.060966 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:48:17.060976 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:48:17.060985 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:48:17.061000 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:48:17.061009 kernel: Console: colour VGA+ 80x25
Nov 12 20:48:17.061018 kernel: printk: console [tty0] enabled
Nov 12 20:48:17.061027 kernel: printk: console [ttyS0] enabled
Nov 12 20:48:17.061036 kernel: ACPI: Core revision 20230628
Nov 12 20:48:17.061045 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:48:17.061058 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:48:17.061067 kernel: x2apic enabled
Nov 12 20:48:17.061076 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:48:17.061086 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:48:17.061095 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Nov 12 20:48:17.061104 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Nov 12 20:48:17.061113 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 12 20:48:17.061123 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 12 20:48:17.061144 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:48:17.061153 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:48:17.061162 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:48:17.061186 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:48:17.061196 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 12 20:48:17.061205 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:48:17.061215 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:48:17.061224 kernel: MDS: Mitigation: Clear CPU buffers
Nov 12 20:48:17.061234 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:48:17.061250 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:48:17.061260 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:48:17.061269 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:48:17.061279 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:48:17.061288 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 12 20:48:17.061298 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:48:17.061307 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:48:17.061317 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:48:17.061329 kernel: landlock: Up and running.
Nov 12 20:48:17.061339 kernel: SELinux: Initializing.
Nov 12 20:48:17.061348 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:48:17.061358 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 12 20:48:17.061367 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 12 20:48:17.061377 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:17.061387 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:17.061396 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:48:17.061406 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 12 20:48:17.061418 kernel: signal: max sigframe size: 1776
Nov 12 20:48:17.061428 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:48:17.061437 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:48:17.061447 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:48:17.061456 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:48:17.061466 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:48:17.061477 kernel: .... node #0, CPUs: #1
Nov 12 20:48:17.061487 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:48:17.061496 kernel: smpboot: Max logical packages: 1
Nov 12 20:48:17.061509 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Nov 12 20:48:17.061519 kernel: devtmpfs: initialized
Nov 12 20:48:17.061544 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:48:17.061562 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:48:17.061572 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:48:17.061581 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:48:17.061591 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:48:17.061600 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:48:17.061610 kernel: audit: type=2000 audit(1731444496.294:1): state=initialized audit_enabled=0 res=1
Nov 12 20:48:17.061623 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:48:17.061632 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:48:17.061642 kernel: cpuidle: using governor menu
Nov 12 20:48:17.061651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:48:17.061661 kernel: dca service started, version 1.12.1
Nov 12 20:48:17.061670 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:48:17.061680 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:48:17.061689 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:48:17.061699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:48:17.061711 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:48:17.061720 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:48:17.061730 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:48:17.061739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:48:17.061749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:48:17.061758 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:48:17.061768 kernel: ACPI: Interpreter enabled
Nov 12 20:48:17.061777 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:48:17.061787 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:48:17.061799 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:48:17.061809 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:48:17.061818 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 12 20:48:17.061827 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:48:17.062062 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:48:17.062193 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:48:17.062298 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:48:17.062316 kernel: acpiphp: Slot [3] registered
Nov 12 20:48:17.062326 kernel: acpiphp: Slot [4] registered
Nov 12 20:48:17.062335 kernel: acpiphp: Slot [5] registered
Nov 12 20:48:17.062345 kernel: acpiphp: Slot [6] registered
Nov 12 20:48:17.062354 kernel: acpiphp: Slot [7] registered
Nov 12 20:48:17.062363 kernel: acpiphp: Slot [8] registered
Nov 12 20:48:17.062373 kernel: acpiphp: Slot [9] registered
Nov 12 20:48:17.062382 kernel: acpiphp: Slot [10] registered
Nov 12 20:48:17.062391 kernel: acpiphp: Slot [11] registered
Nov 12 20:48:17.062404 kernel: acpiphp: Slot [12] registered
Nov 12 20:48:17.062413 kernel: acpiphp: Slot [13] registered
Nov 12 20:48:17.062423 kernel: acpiphp: Slot [14] registered
Nov 12 20:48:17.062432 kernel: acpiphp: Slot [15] registered
Nov 12 20:48:17.062442 kernel: acpiphp: Slot [16] registered
Nov 12 20:48:17.062457 kernel: acpiphp: Slot [17] registered
Nov 12 20:48:17.062466 kernel: acpiphp: Slot [18] registered
Nov 12 20:48:17.062476 kernel: acpiphp: Slot [19] registered
Nov 12 20:48:17.062485 kernel: acpiphp: Slot [20] registered
Nov 12 20:48:17.062494 kernel: acpiphp: Slot [21] registered
Nov 12 20:48:17.062548 kernel: acpiphp: Slot [22] registered
Nov 12 20:48:17.062558 kernel: acpiphp: Slot [23] registered
Nov 12 20:48:17.062567 kernel: acpiphp: Slot [24] registered
Nov 12 20:48:17.062576 kernel: acpiphp: Slot [25] registered
Nov 12 20:48:17.062586 kernel: acpiphp: Slot [26] registered
Nov 12 20:48:17.062595 kernel: acpiphp: Slot [27] registered
Nov 12 20:48:17.062604 kernel: acpiphp: Slot [28] registered
Nov 12 20:48:17.062614 kernel: acpiphp: Slot [29] registered
Nov 12 20:48:17.062623 kernel: acpiphp: Slot [30] registered
Nov 12 20:48:17.062636 kernel: acpiphp: Slot [31] registered
Nov 12 20:48:17.062645 kernel: PCI host bridge to bus 0000:00
Nov 12 20:48:17.062781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:48:17.062875 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:48:17.062966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:48:17.063056 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 12 20:48:17.063145 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 12 20:48:17.063260 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:48:17.063394 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:48:17.063507 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 12 20:48:17.063623 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 12 20:48:17.063724 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 12 20:48:17.063828 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 12 20:48:17.063994 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 12 20:48:17.064105 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 12 20:48:17.064533 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 12 20:48:17.064677 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 12 20:48:17.064791 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 12 20:48:17.064896 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:48:17.065017 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 12 20:48:17.065275 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 12 20:48:17.065463 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 12 20:48:17.065623 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 12 20:48:17.065726 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 12 20:48:17.065963 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 12 20:48:17.066068 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 12 20:48:17.066176 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:48:17.066295 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:48:17.066395 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 12 20:48:17.067320 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 12 20:48:17.067433 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 12 20:48:17.067564 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:48:17.067675 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 12 20:48:17.067841 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 12 20:48:17.068005 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 12 20:48:17.069238 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 12 20:48:17.069418 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 12 20:48:17.069615 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 12 20:48:17.069721 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 12 20:48:17.071375 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:48:17.071505 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:48:17.071628 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 12 20:48:17.071729 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 12 20:48:17.071843 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:48:17.071941 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 12 20:48:17.072038 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 12 20:48:17.072139 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 12 20:48:17.072311 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 12 20:48:17.072417 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 12 20:48:17.072515 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 12 20:48:17.072528 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:48:17.072538 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:48:17.072548 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:48:17.072557 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:48:17.072567 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:48:17.072580 kernel: iommu: Default domain type: Translated
Nov 12 20:48:17.072589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:48:17.072599 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:48:17.072609 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:48:17.072618 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:48:17.072627 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 12 20:48:17.072728 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 12 20:48:17.072826 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 12 20:48:17.072931 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:48:17.072943 kernel: vgaarb: loaded
Nov 12 20:48:17.072953 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:48:17.072962 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:48:17.072972 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:48:17.072981 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:48:17.072991 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:48:17.073001 kernel: pnp: PnP ACPI init
Nov 12 20:48:17.073011 kernel: pnp: PnP ACPI: found 4 devices
Nov 12 20:48:17.073025 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:48:17.073035 kernel: NET: Registered PF_INET protocol family
Nov 12 20:48:17.073044 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:48:17.073054 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 12 20:48:17.073063 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:48:17.073073 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:48:17.073082 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 12 20:48:17.073092 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 12 20:48:17.073101 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:48:17.073114 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 12 20:48:17.073123 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:48:17.073133 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:48:17.075260 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:48:17.075368 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:48:17.075458 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:48:17.075547 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 12 20:48:17.075647 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 12 20:48:17.075770 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 12 20:48:17.075876 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:48:17.075890 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 12 20:48:17.075991 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42064 usecs
Nov 12 20:48:17.076009 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:48:17.076020 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:48:17.076030 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Nov 12 20:48:17.076039 kernel: Initialise system trusted keyrings
Nov 12 20:48:17.076053 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 12 20:48:17.076063 kernel: Key type asymmetric registered
Nov 12 20:48:17.076072 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:48:17.076082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:48:17.076092 kernel: io scheduler mq-deadline registered
Nov 12 20:48:17.076102 kernel: io scheduler kyber registered
Nov 12 20:48:17.076112 kernel: io scheduler bfq registered
Nov 12 20:48:17.076121 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:48:17.076131 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 12 20:48:17.076140 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:48:17.076152 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:48:17.076162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:48:17.077227 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:48:17.077238 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:48:17.077247 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:48:17.077257 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:48:17.077267 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:48:17.077409 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 12 20:48:17.077512 kernel: rtc_cmos 00:03: registered as rtc0
Nov 12 20:48:17.077629 kernel: rtc_cmos 00:03: setting system clock to 2024-11-12T20:48:16 UTC (1731444496)
Nov 12 20:48:17.077721 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 12 20:48:17.077733 kernel: intel_pstate: CPU model not supported
Nov 12 20:48:17.077743 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:48:17.077753 kernel: Segment Routing with IPv6
Nov 12 20:48:17.077762 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:48:17.077772 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:48:17.077786 kernel: Key type dns_resolver registered
Nov 12 20:48:17.077796 kernel: IPI shorthand broadcast: enabled
Nov 12 20:48:17.077806 kernel: sched_clock: Marking stable (1082006391, 169016910)->(1431656051, -180632750)
Nov 12 20:48:17.077815 kernel: registered taskstats version 1
Nov 12 20:48:17.077825 kernel: Loading compiled-in X.509 certificates
Nov 12 20:48:17.077834 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:48:17.077844 kernel: Key type .fscrypt registered
Nov 12 20:48:17.077853 kernel: Key type fscrypt-provisioning registered
Nov 12 20:48:17.077863 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:48:17.077875 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:48:17.077884 kernel: ima: No architecture policies found
Nov 12 20:48:17.077895 kernel: clk: Disabling unused clocks
Nov 12 20:48:17.077904 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:48:17.077914 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:48:17.077942 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:48:17.077956 kernel: Run /init as init process
Nov 12 20:48:17.077966 kernel: with arguments:
Nov 12 20:48:17.077976 kernel: /init
Nov 12 20:48:17.077988 kernel: with environment:
Nov 12 20:48:17.077998 kernel: HOME=/
Nov 12 20:48:17.078008 kernel: TERM=linux
Nov 12 20:48:17.078017 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:48:17.078030 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:48:17.078046 systemd[1]: Detected virtualization kvm.
Nov 12 20:48:17.078057 systemd[1]: Detected architecture x86-64.
Nov 12 20:48:17.078067 systemd[1]: Running in initrd.
Nov 12 20:48:17.078080 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:48:17.078090 systemd[1]: Hostname set to .
Nov 12 20:48:17.078101 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:48:17.078112 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:48:17.078122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:48:17.078133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:48:17.078144 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:48:17.078154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:48:17.079197 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:48:17.079210 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:48:17.079223 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:48:17.079234 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:48:17.079245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:48:17.079255 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:48:17.079271 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:48:17.079281 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:48:17.079292 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:48:17.079306 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:48:17.079317 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:48:17.079327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:48:17.079341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Nov 12 20:48:17.079352 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:48:17.079363 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:48:17.079374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:48:17.079384 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:48:17.079395 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:48:17.079405 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:48:17.079416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:48:17.079429 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:48:17.079441 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:48:17.079451 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:48:17.079462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:48:17.079473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:17.079483 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:48:17.079528 systemd-journald[184]: Collecting audit messages is disabled.
Nov 12 20:48:17.079556 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:48:17.079567 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:48:17.079597 systemd-journald[184]: Journal started
Nov 12 20:48:17.079632 systemd-journald[184]: Runtime Journal (/run/log/journal/f7816c47f3294ccc9dbde03fbcf02171) is 4.9M, max 39.3M, 34.4M free.
Nov 12 20:48:17.071474 systemd-modules-load[185]: Inserted module 'overlay'
Nov 12 20:48:17.129877 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:48:17.129923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:48:17.129969 kernel: Bridge firewalling registered
Nov 12 20:48:17.119346 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 12 20:48:17.134219 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:48:17.135664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:48:17.136748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:17.145849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:48:17.154401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:17.157444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:48:17.165457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:48:17.169412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:48:17.187461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:48:17.188525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:48:17.198830 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:17.203537 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:48:17.206803 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:48:17.215504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:48:17.225032 dracut-cmdline[217]: dracut-dracut-053
Nov 12 20:48:17.232197 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:48:17.268514 systemd-resolved[220]: Positive Trust Anchors:
Nov 12 20:48:17.268532 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:48:17.268631 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:48:17.275081 systemd-resolved[220]: Defaulting to hostname 'linux'.
Nov 12 20:48:17.276763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:48:17.278857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:48:17.369246 kernel: SCSI subsystem initialized
Nov 12 20:48:17.380233 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:48:17.395210 kernel: iscsi: registered transport (tcp)
Nov 12 20:48:17.426365 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:48:17.426487 kernel: QLogic iSCSI HBA Driver
Nov 12 20:48:17.487439 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:48:17.505356 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:48:17.541006 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:48:17.541094 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:48:17.543189 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:48:17.590226 kernel: raid6: avx2x4 gen() 17226 MB/s
Nov 12 20:48:17.608224 kernel: raid6: avx2x2 gen() 17405 MB/s
Nov 12 20:48:17.626971 kernel: raid6: avx2x1 gen() 12499 MB/s
Nov 12 20:48:17.627086 kernel: raid6: using algorithm avx2x2 gen() 17405 MB/s
Nov 12 20:48:17.645471 kernel: raid6: .... xor() 11816 MB/s, rmw enabled
Nov 12 20:48:17.645615 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:48:17.671236 kernel: xor: automatically using best checksumming function avx
Nov 12 20:48:17.868208 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:48:17.884017 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:48:17.890540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:48:17.924752 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Nov 12 20:48:17.935587 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:48:17.945774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:48:17.975418 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Nov 12 20:48:18.023784 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:48:18.030532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:48:18.118281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:48:18.125434 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:48:18.157532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:48:18.166016 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:48:18.166861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:48:18.167633 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:48:18.178425 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:48:18.205121 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:48:18.244224 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 12 20:48:18.358299 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 12 20:48:18.358569 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:48:18.358802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:48:18.358831 kernel: GPT:9289727 != 125829119
Nov 12 20:48:18.358857 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:48:18.358882 kernel: GPT:9289727 != 125829119
Nov 12 20:48:18.358908 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:48:18.358941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:48:18.358968 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:48:18.358999 kernel: libata version 3.00 loaded.
Nov 12 20:48:18.359027 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 12 20:48:18.359336 kernel: scsi host1: ata_piix
Nov 12 20:48:18.359583 kernel: scsi host2: ata_piix
Nov 12 20:48:18.359789 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 12 20:48:18.359818 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 12 20:48:18.359852 kernel: ACPI: bus type USB registered
Nov 12 20:48:18.359878 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 12 20:48:18.377566 kernel: usbcore: registered new interface driver usbfs
Nov 12 20:48:18.377602 kernel: usbcore: registered new interface driver hub
Nov 12 20:48:18.377632 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:48:18.377651 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Nov 12 20:48:18.377851 kernel: usbcore: registered new device driver usb
Nov 12 20:48:18.377873 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:48:18.320757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:48:18.320980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:18.323648 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:18.324697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:48:18.324969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:18.325711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:18.339014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:18.422707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:18.430538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:48:18.460568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:18.558222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459)
Nov 12 20:48:18.577196 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 12 20:48:18.593344 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 12 20:48:18.593624 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (452)
Nov 12 20:48:18.593650 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 12 20:48:18.593844 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 12 20:48:18.594026 kernel: hub 1-0:1.0: USB hub found
Nov 12 20:48:18.594825 kernel: hub 1-0:1.0: 2 ports detected
Nov 12 20:48:18.591833 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:48:18.603704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:48:18.614722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:48:18.623297 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:48:18.624210 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:48:18.632475 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:48:18.656357 disk-uuid[550]: Primary Header is updated.
Nov 12 20:48:18.656357 disk-uuid[550]: Secondary Entries is updated.
Nov 12 20:48:18.656357 disk-uuid[550]: Secondary Header is updated.
Nov 12 20:48:18.668199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:48:18.676497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:48:18.689219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:48:19.688519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:48:19.688608 disk-uuid[551]: The operation has completed successfully.
Nov 12 20:48:19.752219 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:48:19.753590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:48:19.778503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:48:19.784626 sh[564]: Success
Nov 12 20:48:19.801636 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:48:19.885086 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:48:19.902708 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:48:19.903716 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:48:19.939446 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:48:19.939579 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:19.939618 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:48:19.941534 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:48:19.943002 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:48:19.959052 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:48:19.961783 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:48:19.969535 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:48:19.973495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:48:19.998908 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:19.999003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:20.001083 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:48:20.007255 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:48:20.021635 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:48:20.023180 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:20.039271 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:48:20.047506 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:48:20.136942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:48:20.151649 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:48:20.216088 systemd-networkd[747]: lo: Link UP
Nov 12 20:48:20.216102 systemd-networkd[747]: lo: Gained carrier
Nov 12 20:48:20.220354 systemd-networkd[747]: Enumeration completed
Nov 12 20:48:20.220923 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:48:20.220929 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 12 20:48:20.221306 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:48:20.222895 systemd[1]: Reached target network.target - Network.
Nov 12 20:48:20.226528 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:48:20.226534 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:48:20.234213 ignition[670]: Ignition 2.19.0
Nov 12 20:48:20.228149 systemd-networkd[747]: eth0: Link UP
Nov 12 20:48:20.234230 ignition[670]: Stage: fetch-offline
Nov 12 20:48:20.228156 systemd-networkd[747]: eth0: Gained carrier
Nov 12 20:48:20.234274 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:20.228189 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 12 20:48:20.234286 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:20.235235 systemd-networkd[747]: eth1: Link UP
Nov 12 20:48:20.234455 ignition[670]: parsed url from cmdline: ""
Nov 12 20:48:20.235242 systemd-networkd[747]: eth1: Gained carrier
Nov 12 20:48:20.234460 ignition[670]: no config URL provided
Nov 12 20:48:20.235261 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:48:20.234467 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:48:20.237443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:48:20.234478 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:48:20.247567 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:48:20.234486 ignition[670]: failed to fetch config: resource requires networking
Nov 12 20:48:20.249288 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253
Nov 12 20:48:20.234845 ignition[670]: Ignition finished successfully
Nov 12 20:48:20.254311 systemd-networkd[747]: eth0: DHCPv4 address 164.92.88.26/20, gateway 164.92.80.1 acquired from 169.254.169.253
Nov 12 20:48:20.288762 ignition[754]: Ignition 2.19.0
Nov 12 20:48:20.289901 ignition[754]: Stage: fetch
Nov 12 20:48:20.290325 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:20.290345 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:20.290523 ignition[754]: parsed url from cmdline: ""
Nov 12 20:48:20.290530 ignition[754]: no config URL provided
Nov 12 20:48:20.290540 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:48:20.290553 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:48:20.290583 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 12 20:48:20.307849 ignition[754]: GET result: OK
Nov 12 20:48:20.308865 ignition[754]: parsing config with SHA512: 67a17317e63960c8227089c6010baafdc541eddb3d2aedc0618f6108948bd3e021c568ded8943566dce6881d65876ea45bc9998e7b8dea859db79c86a263ec0c
Nov 12 20:48:20.318585 unknown[754]: fetched base config from "system"
Nov 12 20:48:20.318605 unknown[754]: fetched base config from "system"
Nov 12 20:48:20.320082 ignition[754]: fetch: fetch complete
Nov 12 20:48:20.318619 unknown[754]: fetched user config from "digitalocean"
Nov 12 20:48:20.320102 ignition[754]: fetch: fetch passed
Nov 12 20:48:20.320200 ignition[754]: Ignition finished successfully
Nov 12 20:48:20.323589 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:48:20.330533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:48:20.369245 ignition[761]: Ignition 2.19.0
Nov 12 20:48:20.369261 ignition[761]: Stage: kargs
Nov 12 20:48:20.369685 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:20.369702 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:20.373031 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:48:20.371338 ignition[761]: kargs: kargs passed
Nov 12 20:48:20.371431 ignition[761]: Ignition finished successfully
Nov 12 20:48:20.383545 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:48:20.420909 ignition[767]: Ignition 2.19.0
Nov 12 20:48:20.420925 ignition[767]: Stage: disks
Nov 12 20:48:20.421255 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:20.421271 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:20.424903 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:48:20.422847 ignition[767]: disks: disks passed
Nov 12 20:48:20.422937 ignition[767]: Ignition finished successfully
Nov 12 20:48:20.432209 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:48:20.433372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:48:20.434772 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:48:20.436114 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:48:20.437525 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:48:20.463544 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:48:20.485353 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:48:20.495967 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:48:20.508406 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:48:20.630192 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:48:20.631024 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:48:20.632346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:48:20.640388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:48:20.649419 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:48:20.653679 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 12 20:48:20.656921 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:48:20.657964 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:48:20.658020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:48:20.665844 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:48:20.674211 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783)
Nov 12 20:48:20.675872 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:48:20.685301 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:20.685347 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:20.685382 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:48:20.690207 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:48:20.696637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:48:20.786234 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:48:20.800738 coreos-metadata[785]: Nov 12 20:48:20.800 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:48:20.805216 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:48:20.806402 coreos-metadata[786]: Nov 12 20:48:20.805 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:48:20.812813 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:48:20.817556 coreos-metadata[785]: Nov 12 20:48:20.816 INFO Fetch successful
Nov 12 20:48:20.819499 coreos-metadata[786]: Nov 12 20:48:20.819 INFO Fetch successful
Nov 12 20:48:20.827662 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 12 20:48:20.832069 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:48:20.827798 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Nov 12 20:48:20.835945 coreos-metadata[786]: Nov 12 20:48:20.835 INFO wrote hostname ci-4081.2.0-5-c2b3883be7 to /sysroot/etc/hostname
Nov 12 20:48:20.838103 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:48:20.974653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:48:20.980399 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:48:20.984414 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:48:20.999523 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:48:21.002346 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:21.031869 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:48:21.059712 ignition[905]: INFO : Ignition 2.19.0
Nov 12 20:48:21.059712 ignition[905]: INFO : Stage: mount
Nov 12 20:48:21.061508 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:21.061508 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:21.063677 ignition[905]: INFO : mount: mount passed
Nov 12 20:48:21.063677 ignition[905]: INFO : Ignition finished successfully
Nov 12 20:48:21.064272 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:48:21.078368 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:48:21.097525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:48:21.128213 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Nov 12 20:48:21.133426 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:48:21.133525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:48:21.135299 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:48:21.141220 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:48:21.145697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:48:21.184211 ignition[935]: INFO : Ignition 2.19.0
Nov 12 20:48:21.184211 ignition[935]: INFO : Stage: files
Nov 12 20:48:21.186069 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:21.186069 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:21.187981 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:48:21.188941 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:48:21.188941 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:48:21.192933 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:48:21.193989 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:48:21.193989 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:48:21.193671 unknown[935]: wrote ssh authorized keys file for user: core
Nov 12 20:48:21.196900 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:48:21.196900 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:48:21.196900 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:48:21.196900 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:48:21.239925 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:48:21.352023 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:48:21.352023 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:48:21.354253 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 12 20:48:21.408424 systemd-networkd[747]: eth1: Gained IPv6LL
Nov 12 20:48:21.702481 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Nov 12 20:48:21.728347 systemd-networkd[747]: eth0: Gained IPv6LL
Nov 12 20:48:21.781719 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:48:21.783050 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:48:21.801136 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:48:21.801136 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:48:21.801136 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:48:21.801136 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:48:21.801136 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:48:22.062852 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Nov 12 20:48:22.717896 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:48:22.717896 ignition[935]: INFO : files: op(d): [started] processing unit "containerd.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(d): [finished] processing unit "containerd.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:48:22.721406 ignition[935]: INFO : files: files passed
Nov 12 20:48:22.721406 ignition[935]: INFO : Ignition finished successfully
Nov 12 20:48:22.721822 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:48:22.730448 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:48:22.737832 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:48:22.746123 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:48:22.747006 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:48:22.755792 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:48:22.755792 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:48:22.758357 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:48:22.760364 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:48:22.761862 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:48:22.768390 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:48:22.802207 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:48:22.803235 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:48:22.804533 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:48:22.805176 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:48:22.806631 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:48:22.818461 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:48:22.837230 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:48:22.844494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:48:22.873660 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:48:22.875529 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:48:22.876536 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:48:22.877767 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:48:22.878054 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:48:22.879448 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:48:22.880949 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:48:22.881998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:48:22.883082 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:48:22.884305 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:48:22.885494 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:48:22.886722 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:48:22.888044 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:48:22.889175 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:48:22.890615 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:48:22.891501 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:48:22.891677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:48:22.892846 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:48:22.893571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:48:22.894722 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:48:22.894859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:48:22.895982 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:48:22.896132 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:48:22.897831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:48:22.898045 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:48:22.899265 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:48:22.899502 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:48:22.900214 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:48:22.900327 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:48:22.912264 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:48:22.914429 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:48:22.916531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:48:22.916869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:48:22.918853 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:48:22.919026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:48:22.930315 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:48:22.930459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:48:22.938601 ignition[987]: INFO : Ignition 2.19.0
Nov 12 20:48:22.941047 ignition[987]: INFO : Stage: umount
Nov 12 20:48:22.941047 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:48:22.941047 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 12 20:48:22.945431 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:48:22.950400 ignition[987]: INFO : umount: umount passed
Nov 12 20:48:22.950400 ignition[987]: INFO : Ignition finished successfully
Nov 12 20:48:22.945594 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:48:22.948002 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:48:22.948072 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:48:22.950970 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:48:22.951048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:48:22.951958 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:48:22.952013 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:48:22.953198 systemd[1]: Stopped target network.target - Network.
Nov 12 20:48:22.954333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:48:22.954434 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:48:22.956075 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:48:22.959289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:48:22.961267 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:48:22.964274 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:48:22.965181 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:48:22.966522 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:48:22.966594 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:48:22.969247 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:48:22.969366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:48:22.970352 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:48:22.970450 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:48:22.973275 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:48:22.973395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:48:22.975195 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:48:22.976214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:48:22.978261 systemd-networkd[747]: eth0: DHCPv6 lease lost
Nov 12 20:48:22.981220 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:48:22.984271 systemd-networkd[747]: eth1: DHCPv6 lease lost
Nov 12 20:48:22.988599 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:48:22.988767 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:48:22.993350 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:48:22.993532 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:48:22.997608 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:48:22.997810 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:48:23.001085 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:48:23.001320 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:48:23.002575 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:48:23.002664 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:48:23.013925 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:48:23.014552 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:48:23.014639 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:48:23.015335 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:48:23.015405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:48:23.016098 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:48:23.016154 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:48:23.017596 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:48:23.017680 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:48:23.019208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:48:23.034623 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:48:23.035775 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:48:23.038768 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:48:23.039542 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:48:23.042611 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:48:23.042728 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:48:23.043453 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:48:23.043505 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:48:23.044741 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:48:23.044810 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:48:23.046611 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:48:23.046690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:48:23.047878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:48:23.047950 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:48:23.056652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:48:23.057940 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:48:23.058051 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:48:23.060680 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:48:23.060761 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:48:23.061526 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:48:23.061596 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:48:23.062197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:48:23.062264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:23.071598 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:48:23.071763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:48:23.073693 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:48:23.085912 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:48:23.099072 systemd[1]: Switching root.
Nov 12 20:48:23.233673 systemd-journald[184]: Journal stopped
Nov 12 20:48:24.950884 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:48:24.950965 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:48:24.950984 kernel: SELinux: policy capability open_perms=1
Nov 12 20:48:24.951001 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:48:24.951013 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:48:24.951025 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:48:24.951038 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:48:24.951049 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:48:24.951062 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:48:24.951080 kernel: audit: type=1403 audit(1731444503.522:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:48:24.951093 systemd[1]: Successfully loaded SELinux policy in 60.444ms.
Nov 12 20:48:24.951125 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.592ms.
Nov 12 20:48:24.951139 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:48:24.951152 systemd[1]: Detected virtualization kvm.
Nov 12 20:48:24.951984 systemd[1]: Detected architecture x86-64.
Nov 12 20:48:24.952029 systemd[1]: Detected first boot.
Nov 12 20:48:24.952052 systemd[1]: Hostname set to .
Nov 12 20:48:24.952075 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:48:24.952106 zram_generator::config[1046]: No configuration found.
Nov 12 20:48:24.952149 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:48:24.952202 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:48:24.952230 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 20:48:24.952259 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:48:24.952286 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:48:24.952320 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:48:24.952347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:48:24.952374 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:48:24.952401 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:48:24.952431 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:48:24.952458 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:48:24.952484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:48:24.952511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:48:24.952541 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:48:24.952567 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:48:24.952594 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:48:24.952622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:48:24.952648 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:48:24.952677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:48:24.952703 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:48:24.952729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:48:24.952757 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:48:24.952784 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:48:24.952810 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:48:24.952841 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:48:24.952868 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:48:24.952895 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:48:24.952921 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:48:24.952948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:48:24.952975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:48:24.953001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:48:24.953028 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:48:24.953054 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:48:24.953081 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:48:24.953112 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:48:24.953139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:24.953177 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:48:24.953197 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:48:24.953219 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:48:24.953281 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:48:24.953305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:48:24.953325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:48:24.953350 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:48:24.953375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:48:24.953409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:48:24.953440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:48:24.953483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:48:24.953511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:48:24.955254 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:48:24.955294 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 12 20:48:24.955331 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 12 20:48:24.955359 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:48:24.955386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:48:24.955413 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:48:24.955440 kernel: fuse: init (API version 7.39)
Nov 12 20:48:24.955467 kernel: ACPI: bus type drm_connector registered
Nov 12 20:48:24.955493 kernel: loop: module loaded
Nov 12 20:48:24.955519 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:48:24.955547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:48:24.955578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:24.955605 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:48:24.955632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:48:24.955660 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:48:24.955687 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:48:24.955714 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:48:24.955741 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:48:24.955769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:48:24.955797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:48:24.955827 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:48:24.955895 systemd-journald[1141]: Collecting audit messages is disabled.
Nov 12 20:48:24.955942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:48:24.955969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:48:24.955997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:48:24.956029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:48:24.956056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:48:24.956084 systemd-journald[1141]: Journal started
Nov 12 20:48:24.956134 systemd-journald[1141]: Runtime Journal (/run/log/journal/f7816c47f3294ccc9dbde03fbcf02171) is 4.9M, max 39.3M, 34.4M free.
Nov 12 20:48:24.958244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:48:24.962223 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:48:24.965624 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:48:24.965954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:48:24.967353 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:48:24.967631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:48:24.968966 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:48:24.971426 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:48:24.972669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:48:24.974118 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:48:24.990132 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:48:24.999401 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:48:25.004303 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:48:25.004930 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:48:25.016402 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:48:25.021258 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:48:25.023964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:48:25.038512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:48:25.040114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:48:25.045051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:48:25.059648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:48:25.073464 systemd-journald[1141]: Time spent on flushing to /var/log/journal/f7816c47f3294ccc9dbde03fbcf02171 is 111.848ms for 979 entries.
Nov 12 20:48:25.073464 systemd-journald[1141]: System Journal (/var/log/journal/f7816c47f3294ccc9dbde03fbcf02171) is 8.0M, max 195.6M, 187.6M free.
Nov 12 20:48:25.210254 systemd-journald[1141]: Received client request to flush runtime journal.
Nov 12 20:48:25.076727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:48:25.077990 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:48:25.081001 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:48:25.083758 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:48:25.135091 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 20:48:25.135131 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 12 20:48:25.147828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:48:25.149695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:48:25.157631 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:48:25.204054 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:48:25.216293 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:48:25.218948 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:48:25.221793 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:48:25.242467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:48:25.256263 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:48:25.278884 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 12 20:48:25.279465 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 12 20:48:25.287905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:48:26.311021 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:48:26.317518 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:48:26.355744 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
Nov 12 20:48:26.392731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:48:26.403293 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:48:26.440430 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:48:26.504199 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1227)
Nov 12 20:48:26.513222 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1227)
Nov 12 20:48:26.513882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:26.515481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:48:26.520618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:48:26.533248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:48:26.541041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:48:26.545303 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:48:26.545367 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:48:26.545434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:26.564880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:48:26.569469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:48:26.577412 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 12 20:48:26.579608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:48:26.579843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:48:26.585356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:48:26.585624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:48:26.602027 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:48:26.605700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:48:26.605749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:48:26.626200 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1220)
Nov 12 20:48:26.715227 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:48:26.720244 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 12 20:48:26.725215 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:48:26.735734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:48:26.740513 systemd-networkd[1223]: lo: Link UP
Nov 12 20:48:26.740521 systemd-networkd[1223]: lo: Gained carrier
Nov 12 20:48:26.743935 systemd-networkd[1223]: Enumeration completed
Nov 12 20:48:26.744272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:48:26.745414 systemd-networkd[1223]: eth0: Configuring with /run/systemd/network/10-7a:24:65:a7:c1:a7.network.
Nov 12 20:48:26.748246 systemd-networkd[1223]: eth1: Configuring with /run/systemd/network/10-06:54:6b:9f:4c:51.network.
Nov 12 20:48:26.748893 systemd-networkd[1223]: eth0: Link UP
Nov 12 20:48:26.748960 systemd-networkd[1223]: eth0: Gained carrier
Nov 12 20:48:26.750458 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:48:26.753746 systemd-networkd[1223]: eth1: Link UP
Nov 12 20:48:26.753759 systemd-networkd[1223]: eth1: Gained carrier
Nov 12 20:48:26.772188 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:48:26.823203 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:48:26.864213 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 12 20:48:26.864317 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 12 20:48:26.861370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:26.878314 kernel: Console: switching to colour dummy device 80x25
Nov 12 20:48:26.878389 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 12 20:48:26.878405 kernel: [drm] features: -context_init
Nov 12 20:48:26.880201 kernel: [drm] number of scanouts: 1
Nov 12 20:48:26.885202 kernel: [drm] number of cap sets: 0
Nov 12 20:48:26.893064 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Nov 12 20:48:26.946681 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 12 20:48:26.946827 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:48:26.946862 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 12 20:48:26.959798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:48:26.960211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:26.969960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:48:27.020679 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:48:27.068553 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:48:27.075585 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:48:27.097148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:48:27.103704 lvm[1277]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:48:27.147097 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:48:27.148059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:48:27.162513 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:48:27.170805 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:48:27.207703 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:48:27.208818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:48:27.214505 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 12 20:48:27.214671 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:48:27.214714 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:48:27.217257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:48:27.233244 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 12 20:48:27.235875 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 12 20:48:27.238184 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:48:27.241980 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:48:27.248449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:48:27.259496 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:48:27.259847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:48:27.262809 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:48:27.274529 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:48:27.280851 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:48:27.290534 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:48:27.307457 kernel: loop0: detected capacity change from 0 to 140768
Nov 12 20:48:27.345275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:48:27.348948 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:48:27.362279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:48:27.381208 kernel: loop1: detected capacity change from 0 to 211296
Nov 12 20:48:27.438398 kernel: loop2: detected capacity change from 0 to 142488
Nov 12 20:48:27.494590 kernel: loop3: detected capacity change from 0 to 8
Nov 12 20:48:27.519108 kernel: loop4: detected capacity change from 0 to 140768
Nov 12 20:48:27.563211 kernel: loop5: detected capacity change from 0 to 211296
Nov 12 20:48:27.588253 kernel: loop6: detected capacity change from 0 to 142488
Nov 12 20:48:27.617738 kernel: loop7: detected capacity change from 0 to 8
Nov 12 20:48:27.619093 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 12 20:48:27.619878 (sd-merge)[1308]: Merged extensions into '/usr'.
Nov 12 20:48:27.629969 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:48:27.630004 systemd[1]: Reloading...
Nov 12 20:48:27.704926 zram_generator::config[1333]: No configuration found.
Nov 12 20:48:27.875535 systemd-networkd[1223]: eth0: Gained IPv6LL
Nov 12 20:48:28.007315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:48:28.129659 systemd[1]: Reloading finished in 498 ms.
Nov 12 20:48:28.135879 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:48:28.149087 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:48:28.151629 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:48:28.155119 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:48:28.169534 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:48:28.175433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:48:28.184435 systemd[1]: Reloading requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:48:28.184646 systemd[1]: Reloading...
Nov 12 20:48:28.223614 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:48:28.224155 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:48:28.227806 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:48:28.228330 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Nov 12 20:48:28.228442 systemd-tmpfiles[1390]: ACLs are not supported, ignoring.
Nov 12 20:48:28.233974 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:48:28.233992 systemd-tmpfiles[1390]: Skipping /boot
Nov 12 20:48:28.254025 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:48:28.254047 systemd-tmpfiles[1390]: Skipping /boot
Nov 12 20:48:28.325209 zram_generator::config[1417]: No configuration found.
Nov 12 20:48:28.565526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:48:28.576364 systemd-networkd[1223]: eth1: Gained IPv6LL
Nov 12 20:48:28.692150 systemd[1]: Reloading finished in 506 ms.
Nov 12 20:48:28.726482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:48:28.743681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:48:28.753486 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:48:28.764576 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:48:28.781558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:48:28.790604 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:48:28.816296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:28.819875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:48:28.829318 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:48:28.848327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:48:28.858447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:48:28.859276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:48:28.859533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:28.882247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:48:28.884902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:48:28.892804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:28.893519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:48:28.907077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:48:28.910576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:48:28.912250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:28.915016 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:48:28.921396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:48:28.921637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:48:28.931005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:48:28.932046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:48:28.932225 augenrules[1497]: No rules
Nov 12 20:48:28.939940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:48:28.953890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:48:28.958058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:48:28.965480 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:48:28.983200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:28.984022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:48:28.992728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:48:29.005619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:48:29.024631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:48:29.044640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:48:29.050542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:48:29.063682 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:48:29.067045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:48:29.075545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:48:29.083174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:48:29.084555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:48:29.089932 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:48:29.090545 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:48:29.094031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:48:29.094345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:48:29.097562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:48:29.100494 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:48:29.105278 systemd-resolved[1478]: Positive Trust Anchors:
Nov 12 20:48:29.105309 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:48:29.105370 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:48:29.116630 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:48:29.121959 systemd-resolved[1478]: Using system hostname 'ci-4081.2.0-5-c2b3883be7'.
Nov 12 20:48:29.125159 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:48:29.130028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:48:29.138238 systemd[1]: Reached target network.target - Network.
Nov 12 20:48:29.140152 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:48:29.140981 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:48:29.141789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:48:29.141929 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:48:29.157587 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 20:48:29.160442 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:48:29.249448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 20:48:29.250643 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:48:29.252419 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:48:29.253241 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:48:29.254263 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:48:29.255096 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:48:29.255148 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:48:29.256129 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:48:29.257408 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:48:29.259123 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:48:29.259791 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:48:29.262353 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:48:29.268037 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:48:29.273288 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:48:29.275655 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:48:29.276683 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:48:29.277541 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:48:29.278684 systemd[1]: System is tainted: cgroupsv1
Nov 12 20:48:29.278921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:48:29.278973 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:48:29.293588 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:48:29.299418 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:48:29.308572 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:48:29.324321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:48:29.342728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:48:29.346532 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:48:29.357476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:48:29.361206 jq[1544]: false
Nov 12 20:48:29.370058 coreos-metadata[1540]: Nov 12 20:48:29.367 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:48:29.373586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:48:29.390238 coreos-metadata[1540]: Nov 12 20:48:29.388 INFO Fetch successful
Nov 12 20:48:29.389267 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:48:29.397467 dbus-daemon[1541]: [system] SELinux support is enabled
Nov 12 20:48:29.408652 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:48:29.424430 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:48:29.442348 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:48:29.455559 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:48:29.456944 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:48:29.468710 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:48:29.492227 extend-filesystems[1545]: Found loop4
Nov 12 20:48:29.495422 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found loop5
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found loop6
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found loop7
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda1
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda2
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda3
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found usr
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda4
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda6
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda7
Nov 12 20:48:29.509503 extend-filesystems[1545]: Found vda9
Nov 12 20:48:29.509503 extend-filesystems[1545]: Checking size of /dev/vda9
Nov 12 20:48:30.653816 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 12 20:48:29.504508 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:48:30.670099 extend-filesystems[1545]: Resized partition /dev/vda9
Nov 12 20:48:30.610404 systemd-timesyncd[1534]: Contacted time server 216.31.17.12:123 (0.flatcar.pool.ntp.org).
Nov 12 20:48:30.697167 extend-filesystems[1583]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:48:30.610480 systemd-timesyncd[1534]: Initial clock synchronization to Tue 2024-11-12 20:48:30.610156 UTC.
Nov 12 20:48:30.707581 jq[1572]: true
Nov 12 20:48:30.610559 systemd-resolved[1478]: Clock change detected. Flushing caches.
Nov 12 20:48:30.622489 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:48:30.622991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:48:30.639373 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:48:30.643891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:48:30.645719 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:48:30.683165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:48:30.683577 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:48:30.729017 update_engine[1566]: I20241112 20:48:30.728387  1566 main.cc:92] Flatcar Update Engine starting
Nov 12 20:48:30.739573 update_engine[1566]: I20241112 20:48:30.734467  1566 update_check_scheduler.cc:74] Next update check in 5m4s
Nov 12 20:48:30.756503 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:48:30.777377 jq[1589]: true
Nov 12 20:48:30.781478 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:48:30.823091 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:48:30.835774 tar[1585]: linux-amd64/helm
Nov 12 20:48:30.825545 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:48:30.826208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:48:30.826263 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:48:30.828561 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:48:30.828811 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 12 20:48:30.828856 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:48:30.832339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:48:30.841051 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:48:30.862879 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 12 20:48:30.945784 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 20:48:30.945784 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 12 20:48:30.945784 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 12 20:48:30.964404 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1616)
Nov 12 20:48:30.964442 extend-filesystems[1545]: Resized filesystem in /dev/vda9
Nov 12 20:48:30.964442 extend-filesystems[1545]: Found vdb
Nov 12 20:48:30.946828 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:48:30.947202 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:48:30.956413 systemd-logind[1560]: New seat seat0.
Nov 12 20:48:30.961533 systemd-logind[1560]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:48:30.961560 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:48:30.962061 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:48:31.048675 bash[1636]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:48:31.055733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:48:31.077199 systemd[1]: Starting sshkeys.service...
Nov 12 20:48:31.159060 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 20:48:31.173047 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 20:48:31.401785 coreos-metadata[1649]: Nov 12 20:48:31.400 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 12 20:48:31.413649 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:48:31.419859 coreos-metadata[1649]: Nov 12 20:48:31.419 INFO Fetch successful
Nov 12 20:48:31.441777 unknown[1649]: wrote ssh authorized keys file for user: core
Nov 12 20:48:31.487671 containerd[1590]: time="2024-11-12T20:48:31.487504169Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:48:31.509179 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:48:31.519502 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:48:31.523385 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 12 20:48:31.539658 systemd[1]: Finished sshkeys.service.
Nov 12 20:48:31.592405 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:48:31.605173 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:48:31.610927 containerd[1590]: time="2024-11-12T20:48:31.607248609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.612242191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.612291996Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.612311343Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614162471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614206091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614293591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614311907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614794174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614820459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614837020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616353 containerd[1590]: time="2024-11-12T20:48:31.614847316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.615007148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.615286814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.616429935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.616462020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.616627859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:48:31.616886 containerd[1590]: time="2024-11-12T20:48:31.616735064Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:48:31.635293 containerd[1590]: time="2024-11-12T20:48:31.635224599Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:48:31.639214 containerd[1590]: time="2024-11-12T20:48:31.637972931Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:48:31.639214 containerd[1590]: time="2024-11-12T20:48:31.638034341Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:48:31.639214 containerd[1590]: time="2024-11-12T20:48:31.638062072Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:48:31.639214 containerd[1590]: time="2024-11-12T20:48:31.638087323Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:48:31.639214 containerd[1590]: time="2024-11-12T20:48:31.638396847Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:48:31.641615 containerd[1590]: time="2024-11-12T20:48:31.640481119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644191194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644255318Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644279196Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644308493Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644333666Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644358400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644385185Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644410246Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644434943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644456500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644478574Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644537096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644570037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.644729 containerd[1590]: time="2024-11-12T20:48:31.644588872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.645401 containerd[1590]: time="2024-11-12T20:48:31.644617364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.645401 containerd[1590]: time="2024-11-12T20:48:31.644651647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.644674081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.647938398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.647986234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648010201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648035134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648052849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648073789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648093395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648116489Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648149958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648166438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648181483Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648245677Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
type=io.containerd.tracing.processor.v1 Nov 12 20:48:31.649158 containerd[1590]: time="2024-11-12T20:48:31.648272025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648288674Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648305418Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648320019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648344311Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648360430Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:48:31.649904 containerd[1590]: time="2024-11-12T20:48:31.648378301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:48:31.652826 containerd[1590]: time="2024-11-12T20:48:31.651898326Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:48:31.652826 containerd[1590]: time="2024-11-12T20:48:31.652060774Z" level=info msg="Connect containerd service" Nov 12 20:48:31.652826 containerd[1590]: time="2024-11-12T20:48:31.652152282Z" level=info msg="using legacy CRI server" Nov 12 20:48:31.652826 containerd[1590]: time="2024-11-12T20:48:31.652164950Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:48:31.652826 containerd[1590]: time="2024-11-12T20:48:31.652340037Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:48:31.658367 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:48:31.660377 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Nov 12 20:48:31.670402 containerd[1590]: time="2024-11-12T20:48:31.663495234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:48:31.672990 containerd[1590]: time="2024-11-12T20:48:31.672869382Z" level=info msg="Start subscribing containerd event"
Nov 12 20:48:31.679107 containerd[1590]: time="2024-11-12T20:48:31.677284780Z" level=info msg="Start recovering state"
Nov 12 20:48:31.679107 containerd[1590]: time="2024-11-12T20:48:31.677518698Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 20:48:31.678879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:48:31.682581 containerd[1590]: time="2024-11-12T20:48:31.682393349Z" level=info msg="Start event monitor"
Nov 12 20:48:31.683640 containerd[1590]: time="2024-11-12T20:48:31.682786803Z" level=info msg="Start snapshots syncer"
Nov 12 20:48:31.683640 containerd[1590]: time="2024-11-12T20:48:31.682840962Z" level=info msg="Start cni network conf syncer for default"
Nov 12 20:48:31.683640 containerd[1590]: time="2024-11-12T20:48:31.682856440Z" level=info msg="Start streaming server"
Nov 12 20:48:31.686054 containerd[1590]: time="2024-11-12T20:48:31.685032174Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 20:48:31.686054 containerd[1590]: time="2024-11-12T20:48:31.685144609Z" level=info msg="containerd successfully booted in 0.202276s"
Nov 12 20:48:31.685738 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 20:48:31.728392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:48:31.749196 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:48:31.766650 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:48:31.769341 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:48:32.192113 tar[1585]: linux-amd64/LICENSE
Nov 12 20:48:32.192635 tar[1585]: linux-amd64/README.md
Nov 12 20:48:32.215449 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:48:32.695005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:48:32.697992 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 20:48:32.700078 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:48:32.701601 systemd[1]: Startup finished in 8.010s (kernel) + 8.164s (userspace) = 16.174s.
Nov 12 20:48:33.706095 kubelet[1704]: E1112 20:48:33.705922 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:48:33.711349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:48:33.711652 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:48:37.942507 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 20:48:37.949243 systemd[1]: Started sshd@0-164.92.88.26:22-139.178.68.195:56428.service - OpenSSH per-connection server daemon (139.178.68.195:56428).
Nov 12 20:48:38.036174 sshd[1717]: Accepted publickey for core from 139.178.68.195 port 56428 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:38.039964 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:38.058780 systemd-logind[1560]: New session 1 of user core.
Nov 12 20:48:38.059854 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:48:38.072210 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:48:38.094922 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:48:38.107256 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 20:48:38.114128 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:48:38.255047 systemd[1723]: Queued start job for default target default.target.
Nov 12 20:48:38.256128 systemd[1723]: Created slice app.slice - User Application Slice.
Nov 12 20:48:38.256168 systemd[1723]: Reached target paths.target - Paths.
Nov 12 20:48:38.256195 systemd[1723]: Reached target timers.target - Timers.
Nov 12 20:48:38.267947 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:48:38.276809 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 20:48:38.278126 systemd[1723]: Reached target sockets.target - Sockets.
Nov 12 20:48:38.278161 systemd[1723]: Reached target basic.target - Basic System.
Nov 12 20:48:38.278233 systemd[1723]: Reached target default.target - Main User Target.
Nov 12 20:48:38.278273 systemd[1723]: Startup finished in 155ms.
Nov 12 20:48:38.278561 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:48:38.284448 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:48:38.353241 systemd[1]: Started sshd@1-164.92.88.26:22-139.178.68.195:56434.service - OpenSSH per-connection server daemon (139.178.68.195:56434).
Nov 12 20:48:38.419152 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 56434 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:38.421583 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:38.430328 systemd-logind[1560]: New session 2 of user core.
Nov 12 20:48:38.437325 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:48:38.505710 sshd[1735]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:38.519215 systemd[1]: Started sshd@2-164.92.88.26:22-139.178.68.195:56446.service - OpenSSH per-connection server daemon (139.178.68.195:56446).
Nov 12 20:48:38.520342 systemd[1]: sshd@1-164.92.88.26:22-139.178.68.195:56434.service: Deactivated successfully.
Nov 12 20:48:38.528049 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 20:48:38.531106 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit.
Nov 12 20:48:38.534221 systemd-logind[1560]: Removed session 2.
Nov 12 20:48:38.579438 sshd[1741]: Accepted publickey for core from 139.178.68.195 port 56446 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:38.582394 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:38.595329 systemd-logind[1560]: New session 3 of user core.
Nov 12 20:48:38.601202 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 20:48:38.662940 sshd[1741]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:38.674117 systemd[1]: Started sshd@3-164.92.88.26:22-139.178.68.195:56456.service - OpenSSH per-connection server daemon (139.178.68.195:56456).
Nov 12 20:48:38.674960 systemd[1]: sshd@2-164.92.88.26:22-139.178.68.195:56446.service: Deactivated successfully.
Nov 12 20:48:38.677386 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 20:48:38.680252 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit.
Nov 12 20:48:38.683333 systemd-logind[1560]: Removed session 3.
Nov 12 20:48:38.726128 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 56456 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:38.728416 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:38.737339 systemd-logind[1560]: New session 4 of user core.
Nov 12 20:48:38.743188 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 20:48:38.811521 sshd[1748]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:38.818003 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit.
Nov 12 20:48:38.818924 systemd[1]: sshd@3-164.92.88.26:22-139.178.68.195:56456.service: Deactivated successfully.
Nov 12 20:48:38.823108 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 20:48:38.831266 systemd[1]: Started sshd@4-164.92.88.26:22-139.178.68.195:56458.service - OpenSSH per-connection server daemon (139.178.68.195:56458).
Nov 12 20:48:38.833026 systemd-logind[1560]: Removed session 4.
Nov 12 20:48:38.889357 sshd[1759]: Accepted publickey for core from 139.178.68.195 port 56458 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:38.892084 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:38.899741 systemd-logind[1560]: New session 5 of user core.
Nov 12 20:48:38.907210 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 20:48:38.989301 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 20:48:38.989809 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:48:39.011405 sudo[1763]: pam_unix(sudo:session): session closed for user root
Nov 12 20:48:39.017039 sshd[1759]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:39.032348 systemd[1]: Started sshd@5-164.92.88.26:22-139.178.68.195:56474.service - OpenSSH per-connection server daemon (139.178.68.195:56474).
Nov 12 20:48:39.035358 systemd[1]: sshd@4-164.92.88.26:22-139.178.68.195:56458.service: Deactivated successfully.
Nov 12 20:48:39.041822 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 20:48:39.043501 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit.
Nov 12 20:48:39.045857 systemd-logind[1560]: Removed session 5.
Nov 12 20:48:39.086398 sshd[1765]: Accepted publickey for core from 139.178.68.195 port 56474 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:39.089433 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:39.098086 systemd-logind[1560]: New session 6 of user core.
Nov 12 20:48:39.104278 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 20:48:39.170813 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 20:48:39.171917 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:48:39.177721 sudo[1773]: pam_unix(sudo:session): session closed for user root
Nov 12 20:48:39.185347 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 20:48:39.186147 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:48:39.205125 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 20:48:39.215719 auditctl[1776]: No rules
Nov 12 20:48:39.216330 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 20:48:39.216742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 20:48:39.226369 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:48:39.265180 augenrules[1795]: No rules
Nov 12 20:48:39.266587 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:48:39.269978 sudo[1772]: pam_unix(sudo:session): session closed for user root
Nov 12 20:48:39.274903 sshd[1765]: pam_unix(sshd:session): session closed for user core
Nov 12 20:48:39.283139 systemd[1]: Started sshd@6-164.92.88.26:22-139.178.68.195:56484.service - OpenSSH per-connection server daemon (139.178.68.195:56484).
Nov 12 20:48:39.285599 systemd[1]: sshd@5-164.92.88.26:22-139.178.68.195:56474.service: Deactivated successfully.
Nov 12 20:48:39.288434 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 20:48:39.292766 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit.
Nov 12 20:48:39.294549 systemd-logind[1560]: Removed session 6.
Nov 12 20:48:39.335237 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 56484 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:48:39.337562 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:48:39.344949 systemd-logind[1560]: New session 7 of user core.
Nov 12 20:48:39.353246 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 20:48:39.418107 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 20:48:39.418574 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 20:48:40.015326 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 20:48:40.020023 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 20:48:40.602901 dockerd[1823]: time="2024-11-12T20:48:40.602831101Z" level=info msg="Starting up"
Nov 12 20:48:40.927120 dockerd[1823]: time="2024-11-12T20:48:40.926495247Z" level=info msg="Loading containers: start."
Nov 12 20:48:41.084732 kernel: Initializing XFRM netlink socket
Nov 12 20:48:41.195296 systemd-networkd[1223]: docker0: Link UP
Nov 12 20:48:41.240050 dockerd[1823]: time="2024-11-12T20:48:41.240001299Z" level=info msg="Loading containers: done."
Nov 12 20:48:41.275749 dockerd[1823]: time="2024-11-12T20:48:41.273788568Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 20:48:41.275749 dockerd[1823]: time="2024-11-12T20:48:41.273977015Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 12 20:48:41.275749 dockerd[1823]: time="2024-11-12T20:48:41.274167279Z" level=info msg="Daemon has completed initialization"
Nov 12 20:48:41.359174 dockerd[1823]: time="2024-11-12T20:48:41.359062582Z" level=info msg="API listen on /run/docker.sock"
Nov 12 20:48:41.359923 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 12 20:48:42.570154 containerd[1590]: time="2024-11-12T20:48:42.570071864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\""
Nov 12 20:48:43.343300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848614294.mount: Deactivated successfully.
Nov 12 20:48:43.897235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:48:43.910029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:48:44.117862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:48:44.136167 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:48:44.231261 kubelet[2015]: E1112 20:48:44.231009 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:48:44.236393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:48:44.236653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:48:45.458918 containerd[1590]: time="2024-11-12T20:48:45.458806881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:45.463765 containerd[1590]: time="2024-11-12T20:48:45.463655339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799"
Nov 12 20:48:45.466925 containerd[1590]: time="2024-11-12T20:48:45.466812805Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:45.475015 containerd[1590]: time="2024-11-12T20:48:45.474949725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:45.478732 containerd[1590]: time="2024-11-12T20:48:45.477482948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.907318717s"
Nov 12 20:48:45.478732 containerd[1590]: time="2024-11-12T20:48:45.477539785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\""
Nov 12 20:48:45.519067 containerd[1590]: time="2024-11-12T20:48:45.519003234Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\""
Nov 12 20:48:47.486027 containerd[1590]: time="2024-11-12T20:48:47.485912390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:47.490077 containerd[1590]: time="2024-11-12T20:48:47.489976808Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299"
Nov 12 20:48:47.492828 containerd[1590]: time="2024-11-12T20:48:47.492735266Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:47.500508 containerd[1590]: time="2024-11-12T20:48:47.500407631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:47.503436 containerd[1590]: time="2024-11-12T20:48:47.503173714Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 1.984108541s"
Nov 12 20:48:47.503436 containerd[1590]: time="2024-11-12T20:48:47.503258524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\""
Nov 12 20:48:47.540062 containerd[1590]: time="2024-11-12T20:48:47.539626650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\""
Nov 12 20:48:48.879466 containerd[1590]: time="2024-11-12T20:48:48.879387107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:48.884958 containerd[1590]: time="2024-11-12T20:48:48.884846376Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660"
Nov 12 20:48:48.888041 containerd[1590]: time="2024-11-12T20:48:48.887899639Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:48.894415 containerd[1590]: time="2024-11-12T20:48:48.893457788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:48.895607 containerd[1590]: time="2024-11-12T20:48:48.895534263Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.355853398s"
Nov 12 20:48:48.895607 containerd[1590]: time="2024-11-12T20:48:48.895608047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\""
Nov 12 20:48:48.936471 containerd[1590]: time="2024-11-12T20:48:48.936417172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\""
Nov 12 20:48:48.941230 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 12 20:48:50.148900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924874037.mount: Deactivated successfully.
Nov 12 20:48:50.755584 containerd[1590]: time="2024-11-12T20:48:50.755462863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:50.757918 containerd[1590]: time="2024-11-12T20:48:50.757852938Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816"
Nov 12 20:48:50.760923 containerd[1590]: time="2024-11-12T20:48:50.760806404Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:50.767961 containerd[1590]: time="2024-11-12T20:48:50.767848657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:50.769446 containerd[1590]: time="2024-11-12T20:48:50.769273822Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.83280182s"
Nov 12 20:48:50.769446 containerd[1590]: time="2024-11-12T20:48:50.769329265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\""
Nov 12 20:48:50.818903 containerd[1590]: time="2024-11-12T20:48:50.818831943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 20:48:51.529810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834776049.mount: Deactivated successfully.
Nov 12 20:48:52.049976 systemd-resolved[1478]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Nov 12 20:48:52.942167 containerd[1590]: time="2024-11-12T20:48:52.942041585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:52.945419 containerd[1590]: time="2024-11-12T20:48:52.945330928Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Nov 12 20:48:52.949248 containerd[1590]: time="2024-11-12T20:48:52.949148716Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:52.957723 containerd[1590]: time="2024-11-12T20:48:52.956438518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:52.958517 containerd[1590]: time="2024-11-12T20:48:52.958464492Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.139567595s"
Nov 12 20:48:52.958674 containerd[1590]: time="2024-11-12T20:48:52.958653353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Nov 12 20:48:53.006807 containerd[1590]: time="2024-11-12T20:48:53.006770370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 12 20:48:53.684818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683396926.mount: Deactivated successfully.
Nov 12 20:48:53.709913 containerd[1590]: time="2024-11-12T20:48:53.709797400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:53.713979 containerd[1590]: time="2024-11-12T20:48:53.713837011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Nov 12 20:48:53.717454 containerd[1590]: time="2024-11-12T20:48:53.717381204Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:53.722217 containerd[1590]: time="2024-11-12T20:48:53.722138953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:48:53.723445 containerd[1590]: time="2024-11-12T20:48:53.723236598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 716.022066ms"
Nov 12 20:48:53.723445 containerd[1590]: time="2024-11-12T20:48:53.723298436Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Nov 12 20:48:53.763467 containerd[1590]: time="2024-11-12T20:48:53.763418510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Nov 12 20:48:54.334246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 20:48:54.344114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:48:54.352236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403041818.mount: Deactivated successfully.
Nov 12 20:48:54.535046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:48:54.538538 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:48:54.629432 kubelet[2156]: E1112 20:48:54.629247 2156 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:48:54.636586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:48:54.637536 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:48:56.924820 containerd[1590]: time="2024-11-12T20:48:56.924718346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:56.928671 containerd[1590]: time="2024-11-12T20:48:56.928550510Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:48:56.931894 containerd[1590]: time="2024-11-12T20:48:56.931782644Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:56.938213 containerd[1590]: time="2024-11-12T20:48:56.938115816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:56.941034 containerd[1590]: time="2024-11-12T20:48:56.940499725Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.176776279s" Nov 12 20:48:56.941034 containerd[1590]: time="2024-11-12T20:48:56.940573281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:49:00.512327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:00.529230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:00.575649 systemd[1]: Reloading requested from client PID 2274 ('systemctl') (unit session-7.scope)... Nov 12 20:49:00.575903 systemd[1]: Reloading... 
Nov 12 20:49:00.751717 zram_generator::config[2315]: No configuration found. Nov 12 20:49:00.955030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:49:01.110059 systemd[1]: Reloading finished in 533 ms. Nov 12 20:49:01.170585 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:49:01.170730 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:49:01.171177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:01.186463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:49:01.347996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:49:01.363065 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:49:01.445217 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:49:01.445217 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:49:01.445217 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:49:01.446088 kubelet[2377]: I1112 20:49:01.445283 2377 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:49:02.178896 kubelet[2377]: I1112 20:49:02.178263 2377 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:49:02.178896 kubelet[2377]: I1112 20:49:02.178334 2377 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:49:02.178896 kubelet[2377]: I1112 20:49:02.178726 2377 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:49:02.228529 kubelet[2377]: I1112 20:49:02.228468 2377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:49:02.230400 kubelet[2377]: E1112 20:49:02.230174 2377 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.88.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.256282 kubelet[2377]: I1112 20:49:02.256213 2377 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:49:02.258279 kubelet[2377]: I1112 20:49:02.258205 2377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:49:02.260061 kubelet[2377]: I1112 20:49:02.259964 2377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:49:02.260061 kubelet[2377]: I1112 20:49:02.260039 2377 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:49:02.260061 kubelet[2377]: I1112 20:49:02.260061 2377 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:49:02.260457 kubelet[2377]: 
I1112 20:49:02.260283 2377 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:02.260510 kubelet[2377]: I1112 20:49:02.260476 2377 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:49:02.261480 kubelet[2377]: I1112 20:49:02.261051 2377 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:49:02.261480 kubelet[2377]: I1112 20:49:02.261113 2377 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:49:02.261480 kubelet[2377]: I1112 20:49:02.261150 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:49:02.261480 kubelet[2377]: W1112 20:49:02.261221 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://164.92.88.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-5-c2b3883be7&limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.261480 kubelet[2377]: E1112 20:49:02.261305 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.88.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-5-c2b3883be7&limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.263327 kubelet[2377]: W1112 20:49:02.263124 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://164.92.88.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.263327 kubelet[2377]: E1112 20:49:02.263202 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.88.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.264720 kubelet[2377]: I1112 20:49:02.264158 2377 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:49:02.273712 kubelet[2377]: I1112 20:49:02.273098 2377 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:49:02.273712 kubelet[2377]: W1112 20:49:02.273243 2377 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:49:02.275554 kubelet[2377]: I1112 20:49:02.275514 2377 server.go:1256] "Started kubelet" Nov 12 20:49:02.277974 kubelet[2377]: I1112 20:49:02.277934 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:49:02.288947 kubelet[2377]: E1112 20:49:02.288898 2377 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.88.26:6443/api/v1/namespaces/default/events\": dial tcp 164.92.88.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-5-c2b3883be7.180753a33a9e1a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-5-c2b3883be7,UID:ci-4081.2.0-5-c2b3883be7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-5-c2b3883be7,},FirstTimestamp:2024-11-12 20:49:02.275459686 +0000 UTC m=+0.900710827,LastTimestamp:2024-11-12 20:49:02.275459686 +0000 UTC m=+0.900710827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-5-c2b3883be7,}" Nov 12 20:49:02.291147 kubelet[2377]: I1112 20:49:02.290357 2377 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:49:02.292520 kubelet[2377]: I1112 20:49:02.292480 2377 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:49:02.293639 kubelet[2377]: I1112 20:49:02.293592 2377 volume_manager.go:291] 
"Starting Kubelet Volume Manager" Nov 12 20:49:02.294475 kubelet[2377]: I1112 20:49:02.294451 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:49:02.294878 kubelet[2377]: I1112 20:49:02.294857 2377 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:49:02.300227 kubelet[2377]: I1112 20:49:02.298924 2377 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:49:02.300227 kubelet[2377]: I1112 20:49:02.299065 2377 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:49:02.300227 kubelet[2377]: E1112 20:49:02.299364 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.88.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-5-c2b3883be7?timeout=10s\": dial tcp 164.92.88.26:6443: connect: connection refused" interval="200ms" Nov 12 20:49:02.300227 kubelet[2377]: E1112 20:49:02.299464 2377 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:49:02.300227 kubelet[2377]: I1112 20:49:02.299640 2377 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:49:02.300227 kubelet[2377]: I1112 20:49:02.299763 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:49:02.307536 kubelet[2377]: W1112 20:49:02.304962 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://164.92.88.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.307536 kubelet[2377]: E1112 20:49:02.305067 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.88.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.307536 kubelet[2377]: I1112 20:49:02.306012 2377 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:49:02.321433 kubelet[2377]: I1112 20:49:02.321160 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:49:02.327811 kubelet[2377]: I1112 20:49:02.326659 2377 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:49:02.327811 kubelet[2377]: I1112 20:49:02.326754 2377 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:49:02.327811 kubelet[2377]: I1112 20:49:02.326789 2377 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:49:02.327811 kubelet[2377]: E1112 20:49:02.326876 2377 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:49:02.362175 kubelet[2377]: W1112 20:49:02.362075 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://164.92.88.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.362175 kubelet[2377]: E1112 20:49:02.362176 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.88.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:02.370245 kubelet[2377]: I1112 20:49:02.370211 2377 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:49:02.370542 kubelet[2377]: I1112 20:49:02.370521 2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:49:02.371053 kubelet[2377]: I1112 20:49:02.370746 2377 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:49:02.376600 kubelet[2377]: I1112 20:49:02.376366 2377 policy_none.go:49] "None policy: Start" Nov 12 20:49:02.378987 kubelet[2377]: I1112 20:49:02.378949 2377 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:49:02.379435 kubelet[2377]: I1112 20:49:02.379244 2377 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:49:02.397072 kubelet[2377]: I1112 20:49:02.396123 2377 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:49:02.397072 kubelet[2377]: I1112 20:49:02.396619 2377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.397072 kubelet[2377]: I1112 20:49:02.396821 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:49:02.397776 kubelet[2377]: E1112 20:49:02.397749 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.88.26:6443/api/v1/nodes\": dial tcp 164.92.88.26:6443: connect: connection refused" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.400940 kubelet[2377]: E1112 20:49:02.400885 2377 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-5-c2b3883be7\" not found" Nov 12 20:49:02.428570 kubelet[2377]: I1112 20:49:02.427855 2377 topology_manager.go:215] "Topology Admit Handler" podUID="143b944d8a938fefc742bfa1dfbd7bb3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.429768 kubelet[2377]: I1112 20:49:02.429615 2377 topology_manager.go:215] "Topology Admit Handler" podUID="22d9f3f3e31419233eeaf8434e20308d" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.433106 kubelet[2377]: I1112 20:49:02.433055 2377 topology_manager.go:215] "Topology Admit Handler" podUID="a917e4d6fcf8cbbef5ed9938f08123fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.499958 kubelet[2377]: E1112 20:49:02.499917 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.88.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-5-c2b3883be7?timeout=10s\": dial tcp 164.92.88.26:6443: connect: connection refused" interval="400ms" Nov 12 20:49:02.603719 kubelet[2377]: I1112 20:49:02.603595 2377 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.603719 kubelet[2377]: I1112 20:49:02.603701 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.603719 kubelet[2377]: I1112 20:49:02.603745 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604026 kubelet[2377]: I1112 20:49:02.603785 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604026 kubelet[2377]: I1112 20:49:02.603824 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: 
\"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604026 kubelet[2377]: I1112 20:49:02.603859 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604026 kubelet[2377]: I1112 20:49:02.603900 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22d9f3f3e31419233eeaf8434e20308d-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-5-c2b3883be7\" (UID: \"22d9f3f3e31419233eeaf8434e20308d\") " pod="kube-system/kube-scheduler-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604026 kubelet[2377]: I1112 20:49:02.603932 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: \"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.604274 kubelet[2377]: I1112 20:49:02.603974 2377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: \"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.606190 kubelet[2377]: I1112 20:49:02.605961 2377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.606501 kubelet[2377]: E1112 
20:49:02.606470 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.88.26:6443/api/v1/nodes\": dial tcp 164.92.88.26:6443: connect: connection refused" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:02.744734 kubelet[2377]: E1112 20:49:02.744029 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:02.746382 containerd[1590]: time="2024-11-12T20:49:02.745963399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-5-c2b3883be7,Uid:143b944d8a938fefc742bfa1dfbd7bb3,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:02.747040 kubelet[2377]: E1112 20:49:02.746003 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:02.750364 kubelet[2377]: E1112 20:49:02.749986 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:49:02.750533 containerd[1590]: time="2024-11-12T20:49:02.750411356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-5-c2b3883be7,Uid:22d9f3f3e31419233eeaf8434e20308d,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:02.750979 containerd[1590]: time="2024-11-12T20:49:02.750933411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-5-c2b3883be7,Uid:a917e4d6fcf8cbbef5ed9938f08123fc,Namespace:kube-system,Attempt:0,}" Nov 12 20:49:02.757273 systemd-resolved[1478]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Nov 12 20:49:02.902283 kubelet[2377]: E1112 20:49:02.902237 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.88.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-5-c2b3883be7?timeout=10s\": dial tcp 164.92.88.26:6443: connect: connection refused" interval="800ms" Nov 12 20:49:03.008592 kubelet[2377]: I1112 20:49:03.008346 2377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:03.008941 kubelet[2377]: E1112 20:49:03.008906 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.88.26:6443/api/v1/nodes\": dial tcp 164.92.88.26:6443: connect: connection refused" node="ci-4081.2.0-5-c2b3883be7" Nov 12 20:49:03.222937 kubelet[2377]: W1112 20:49:03.222826 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://164.92.88.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:03.222937 kubelet[2377]: E1112 20:49:03.222932 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.88.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:03.303888 kubelet[2377]: W1112 20:49:03.303714 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://164.92.88.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-5-c2b3883be7&limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused Nov 12 20:49:03.303888 kubelet[2377]: E1112 20:49:03.303808 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://164.92.88.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-5-c2b3883be7&limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:03.409154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118800682.mount: Deactivated successfully.
Nov 12 20:49:03.444271 containerd[1590]: time="2024-11-12T20:49:03.444189821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:49:03.449759 containerd[1590]: time="2024-11-12T20:49:03.449661672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:49:03.452225 containerd[1590]: time="2024-11-12T20:49:03.452129423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 12 20:49:03.455840 containerd[1590]: time="2024-11-12T20:49:03.455728310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:49:03.459299 containerd[1590]: time="2024-11-12T20:49:03.459213891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:49:03.465720 containerd[1590]: time="2024-11-12T20:49:03.464926751Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:49:03.470090 containerd[1590]: time="2024-11-12T20:49:03.470013434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:49:03.476875 containerd[1590]: time="2024-11-12T20:49:03.476797939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:49:03.478391 containerd[1590]: time="2024-11-12T20:49:03.478325311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 732.2388ms"
Nov 12 20:49:03.482859 containerd[1590]: time="2024-11-12T20:49:03.482668971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 731.666183ms"
Nov 12 20:49:03.484710 containerd[1590]: time="2024-11-12T20:49:03.484630284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 734.101429ms"
Nov 12 20:49:03.631947 kubelet[2377]: W1112 20:49:03.631746 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://164.92.88.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:03.631947 kubelet[2377]: E1112 20:49:03.631813 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.88.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:03.704178 kubelet[2377]: E1112 20:49:03.704119 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.88.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-5-c2b3883be7?timeout=10s\": dial tcp 164.92.88.26:6443: connect: connection refused" interval="1.6s"
Nov 12 20:49:03.735608 kubelet[2377]: W1112 20:49:03.735511 2377 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://164.92.88.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:03.735608 kubelet[2377]: E1112 20:49:03.735613 2377 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.88.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:03.764791 containerd[1590]: time="2024-11-12T20:49:03.764060502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:03.764791 containerd[1590]: time="2024-11-12T20:49:03.764137516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:03.764791 containerd[1590]: time="2024-11-12T20:49:03.764154112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.764791 containerd[1590]: time="2024-11-12T20:49:03.764269471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.768791 containerd[1590]: time="2024-11-12T20:49:03.768361548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:03.768791 containerd[1590]: time="2024-11-12T20:49:03.768454216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:03.768791 containerd[1590]: time="2024-11-12T20:49:03.768489184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.768791 containerd[1590]: time="2024-11-12T20:49:03.768639818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.769405 containerd[1590]: time="2024-11-12T20:49:03.769072286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:03.769405 containerd[1590]: time="2024-11-12T20:49:03.769145405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:03.769405 containerd[1590]: time="2024-11-12T20:49:03.769178261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.769674 containerd[1590]: time="2024-11-12T20:49:03.769615673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:03.813498 kubelet[2377]: I1112 20:49:03.813454 2377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:03.814635 kubelet[2377]: E1112 20:49:03.813917 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.88.26:6443/api/v1/nodes\": dial tcp 164.92.88.26:6443: connect: connection refused" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:03.906973 containerd[1590]: time="2024-11-12T20:49:03.906894821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-5-c2b3883be7,Uid:a917e4d6fcf8cbbef5ed9938f08123fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b198e4114f4e686d4da3b191f89539c8475d45b71db6c2a53ea6fe0b7da5bc9\""
Nov 12 20:49:03.913016 kubelet[2377]: E1112 20:49:03.912970 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:03.926219 containerd[1590]: time="2024-11-12T20:49:03.925992499Z" level=info msg="CreateContainer within sandbox \"8b198e4114f4e686d4da3b191f89539c8475d45b71db6c2a53ea6fe0b7da5bc9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 20:49:03.930506 containerd[1590]: time="2024-11-12T20:49:03.930178328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-5-c2b3883be7,Uid:143b944d8a938fefc742bfa1dfbd7bb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf4e83ce52e885c5b4b50ba5e7044855ec8ed1a0b8318a3c1675467273d6eaba\""
Nov 12 20:49:03.939645 kubelet[2377]: E1112 20:49:03.939589 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:03.946014 containerd[1590]: time="2024-11-12T20:49:03.945709539Z" level=info msg="CreateContainer within sandbox \"bf4e83ce52e885c5b4b50ba5e7044855ec8ed1a0b8318a3c1675467273d6eaba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 20:49:03.947588 containerd[1590]: time="2024-11-12T20:49:03.947307082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-5-c2b3883be7,Uid:22d9f3f3e31419233eeaf8434e20308d,Namespace:kube-system,Attempt:0,} returns sandbox id \"334f3705bed7d360484b0ab94bd4e244f11b809a12a981f1914c640621628f74\""
Nov 12 20:49:03.949251 kubelet[2377]: E1112 20:49:03.948165 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:03.950663 containerd[1590]: time="2024-11-12T20:49:03.950616728Z" level=info msg="CreateContainer within sandbox \"334f3705bed7d360484b0ab94bd4e244f11b809a12a981f1914c640621628f74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 20:49:03.993565 containerd[1590]: time="2024-11-12T20:49:03.993486215Z" level=info msg="CreateContainer within sandbox \"8b198e4114f4e686d4da3b191f89539c8475d45b71db6c2a53ea6fe0b7da5bc9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54b323e3cc560fb0428e7d1eb756c907e3b5edbefffe7c0b8a1899136a1ab204\""
Nov 12 20:49:03.994728 containerd[1590]: time="2024-11-12T20:49:03.994649104Z" level=info msg="StartContainer for \"54b323e3cc560fb0428e7d1eb756c907e3b5edbefffe7c0b8a1899136a1ab204\""
Nov 12 20:49:04.009635 containerd[1590]: time="2024-11-12T20:49:04.009555052Z" level=info msg="CreateContainer within sandbox \"334f3705bed7d360484b0ab94bd4e244f11b809a12a981f1914c640621628f74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8cc4caf748a7afa22c0c07e8a15cf2cc71ed8e515178bb24bdf613ff79b0772b\""
Nov 12 20:49:04.012888 containerd[1590]: time="2024-11-12T20:49:04.011303309Z" level=info msg="StartContainer for \"8cc4caf748a7afa22c0c07e8a15cf2cc71ed8e515178bb24bdf613ff79b0772b\""
Nov 12 20:49:04.020254 containerd[1590]: time="2024-11-12T20:49:04.020190655Z" level=info msg="CreateContainer within sandbox \"bf4e83ce52e885c5b4b50ba5e7044855ec8ed1a0b8318a3c1675467273d6eaba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"249d6e5da7c645d3469055d8b9aaf7b088f1a1be1329b75808770a44fa9e962a\""
Nov 12 20:49:04.021548 containerd[1590]: time="2024-11-12T20:49:04.021495563Z" level=info msg="StartContainer for \"249d6e5da7c645d3469055d8b9aaf7b088f1a1be1329b75808770a44fa9e962a\""
Nov 12 20:49:04.160852 containerd[1590]: time="2024-11-12T20:49:04.160741619Z" level=info msg="StartContainer for \"54b323e3cc560fb0428e7d1eb756c907e3b5edbefffe7c0b8a1899136a1ab204\" returns successfully"
Nov 12 20:49:04.191252 containerd[1590]: time="2024-11-12T20:49:04.191189253Z" level=info msg="StartContainer for \"8cc4caf748a7afa22c0c07e8a15cf2cc71ed8e515178bb24bdf613ff79b0772b\" returns successfully"
Nov 12 20:49:04.204192 containerd[1590]: time="2024-11-12T20:49:04.204128735Z" level=info msg="StartContainer for \"249d6e5da7c645d3469055d8b9aaf7b088f1a1be1329b75808770a44fa9e962a\" returns successfully"
Nov 12 20:49:04.248347 kubelet[2377]: E1112 20:49:04.248299 2377 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.88.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.88.26:6443: connect: connection refused
Nov 12 20:49:04.382173 kubelet[2377]: E1112 20:49:04.382123 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:04.401750 kubelet[2377]: E1112 20:49:04.396002 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:04.401750 kubelet[2377]: E1112 20:49:04.401051 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:05.402250 kubelet[2377]: E1112 20:49:05.402206 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:05.418469 kubelet[2377]: I1112 20:49:05.416037 2377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:06.402035 kubelet[2377]: E1112 20:49:06.401998 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:07.291293 kubelet[2377]: I1112 20:49:07.291241 2377 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:07.429968 kubelet[2377]: E1112 20:49:07.429899 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Nov 12 20:49:08.266130 kubelet[2377]: I1112 20:49:08.266038 2377 apiserver.go:52] "Watching apiserver"
Nov 12 20:49:08.299371 kubelet[2377]: I1112 20:49:08.299274 2377 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 20:49:11.030244 kubelet[2377]: W1112 20:49:11.029198 2377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:49:11.030244 kubelet[2377]: E1112 20:49:11.030091 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:11.120655 systemd[1]: Reloading requested from client PID 2653 ('systemctl') (unit session-7.scope)...
Nov 12 20:49:11.121086 systemd[1]: Reloading...
Nov 12 20:49:11.281325 zram_generator::config[2692]: No configuration found.
Nov 12 20:49:11.413829 kubelet[2377]: E1112 20:49:11.413782 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:11.466417 kubelet[2377]: W1112 20:49:11.466369 2377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:49:11.467762 kubelet[2377]: E1112 20:49:11.467694 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:11.557309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:49:11.741910 systemd[1]: Reloading finished in 617 ms.
Nov 12 20:49:11.796579 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:49:11.798841 kubelet[2377]: I1112 20:49:11.798144 2377 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:49:11.816349 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:49:11.818136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:49:11.827190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:49:12.061019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:49:12.079497 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:49:12.190714 kubelet[2753]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:49:12.190714 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:49:12.190714 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:49:12.190714 kubelet[2753]: I1112 20:49:12.188821 2753 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:49:12.206561 kubelet[2753]: I1112 20:49:12.206517 2753 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 20:49:12.206839 kubelet[2753]: I1112 20:49:12.206804 2753 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:49:12.208505 kubelet[2753]: I1112 20:49:12.208470 2753 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 20:49:12.211094 kubelet[2753]: I1112 20:49:12.211020 2753 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 20:49:12.218981 kubelet[2753]: I1112 20:49:12.218889 2753 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:49:12.237479 kubelet[2753]: I1112 20:49:12.236106 2753 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:49:12.237479 kubelet[2753]: I1112 20:49:12.237123 2753 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:49:12.237479 kubelet[2753]: I1112 20:49:12.237387 2753 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 20:49:12.237479 kubelet[2753]: I1112 20:49:12.237465 2753 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:49:12.237479 kubelet[2753]: I1112 20:49:12.237485 2753 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 20:49:12.237978 kubelet[2753]: I1112 20:49:12.237533 2753 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:49:12.237978 kubelet[2753]: I1112 20:49:12.237674 2753 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 20:49:12.237978 kubelet[2753]: I1112 20:49:12.237807 2753 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:49:12.237978 kubelet[2753]: I1112 20:49:12.237847 2753 kubelet.go:312] "Adding apiserver pod source"
Nov 12 20:49:12.237978 kubelet[2753]: I1112 20:49:12.237881 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:49:12.241706 kubelet[2753]: I1112 20:49:12.241570 2753 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:49:12.242371 kubelet[2753]: I1112 20:49:12.242042 2753 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:49:12.242847 kubelet[2753]: I1112 20:49:12.242779 2753 server.go:1256] "Started kubelet"
Nov 12 20:49:12.251460 kubelet[2753]: I1112 20:49:12.251311 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:49:12.263993 kubelet[2753]: I1112 20:49:12.263945 2753 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:49:12.266104 kubelet[2753]: I1112 20:49:12.266060 2753 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 20:49:12.270305 kubelet[2753]: I1112 20:49:12.270251 2753 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:49:12.270771 kubelet[2753]: I1112 20:49:12.270639 2753 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:49:12.294112 kubelet[2753]: I1112 20:49:12.293751 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:49:12.301374 kubelet[2753]: I1112 20:49:12.300839 2753 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 20:49:12.304703 kubelet[2753]: I1112 20:49:12.304508 2753 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:49:12.304703 kubelet[2753]: I1112 20:49:12.304562 2753 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:49:12.304703 kubelet[2753]: I1112 20:49:12.304586 2753 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 20:49:12.304703 kubelet[2753]: E1112 20:49:12.304669 2753 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:49:12.323729 kubelet[2753]: I1112 20:49:12.317444 2753 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 20:49:12.323729 kubelet[2753]: I1112 20:49:12.318840 2753 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 20:49:12.328353 kubelet[2753]: I1112 20:49:12.328305 2753 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:49:12.328566 kubelet[2753]: I1112 20:49:12.328472 2753 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:49:12.335137 kubelet[2753]: I1112 20:49:12.333712 2753 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:49:12.336418 kubelet[2753]: E1112 20:49:12.336378 2753 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:49:12.376266 sudo[2781]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 12 20:49:12.376854 sudo[2781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 12 20:49:12.403607 kubelet[2753]: I1112 20:49:12.403558 2753 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.405067 kubelet[2753]: E1112 20:49:12.404991 2753 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:49:12.421984 kubelet[2753]: I1112 20:49:12.421938 2753 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.422427 kubelet[2753]: I1112 20:49:12.422403 2753 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.493674 kubelet[2753]: I1112 20:49:12.493622 2753 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:49:12.494524 kubelet[2753]: I1112 20:49:12.494111 2753 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:49:12.494524 kubelet[2753]: I1112 20:49:12.494152 2753 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:49:12.494524 kubelet[2753]: I1112 20:49:12.494380 2753 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 20:49:12.494524 kubelet[2753]: I1112 20:49:12.494414 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 20:49:12.494524 kubelet[2753]: I1112 20:49:12.494428 2753 policy_none.go:49] "None policy: Start"
Nov 12 20:49:12.498745 kubelet[2753]: I1112 20:49:12.497095 2753 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:49:12.498745 kubelet[2753]: I1112 20:49:12.497148 2753 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:49:12.498745 kubelet[2753]: I1112 20:49:12.497503 2753 state_mem.go:75] "Updated machine memory state"
Nov 12 20:49:12.499856 kubelet[2753]: I1112 20:49:12.499825 2753 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:49:12.501810 kubelet[2753]: I1112 20:49:12.501332 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:49:12.606818 kubelet[2753]: I1112 20:49:12.605671 2753 topology_manager.go:215] "Topology Admit Handler" podUID="a917e4d6fcf8cbbef5ed9938f08123fc" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.606981 kubelet[2753]: I1112 20:49:12.606839 2753 topology_manager.go:215] "Topology Admit Handler" podUID="143b944d8a938fefc742bfa1dfbd7bb3" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.606981 kubelet[2753]: I1112 20:49:12.606906 2753 topology_manager.go:215] "Topology Admit Handler" podUID="22d9f3f3e31419233eeaf8434e20308d" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.620381 kubelet[2753]: I1112 20:49:12.620331 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: \"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621217 kubelet[2753]: I1112 20:49:12.621186 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621449 kubelet[2753]: I1112 20:49:12.621394 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621585 kubelet[2753]: I1112 20:49:12.621572 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621729 kubelet[2753]: I1112 20:49:12.621715 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22d9f3f3e31419233eeaf8434e20308d-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-5-c2b3883be7\" (UID: \"22d9f3f3e31419233eeaf8434e20308d\") " pod="kube-system/kube-scheduler-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621845 kubelet[2753]: I1112 20:49:12.621835 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: \"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.621962 kubelet[2753]: I1112 20:49:12.621953 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a917e4d6fcf8cbbef5ed9938f08123fc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-5-c2b3883be7\" (UID: \"a917e4d6fcf8cbbef5ed9938f08123fc\") " pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.622142 kubelet[2753]: I1112 20:49:12.622098 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.622241 kubelet[2753]: I1112 20:49:12.622217 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/143b944d8a938fefc742bfa1dfbd7bb3-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" (UID: \"143b944d8a938fefc742bfa1dfbd7bb3\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.625537 kubelet[2753]: W1112 20:49:12.625493 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:49:12.625828 kubelet[2753]: E1112 20:49:12.625644 2753 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.0-5-c2b3883be7\" already exists" pod="kube-system/kube-scheduler-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.629031 kubelet[2753]: W1112 20:49:12.628986 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:49:12.629564 kubelet[2753]: E1112 20:49:12.629526 2753 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.0-5-c2b3883be7\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7"
Nov 12 20:49:12.630012 kubelet[2753]: W1112 20:49:12.629141 2753 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:49:12.928718 kubelet[2753]: E1112 20:49:12.928427 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:12.931778 kubelet[2753]: E1112 20:49:12.931708 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:12.935541 kubelet[2753]: E1112 20:49:12.935424 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:13.240904 kubelet[2753]: I1112 20:49:13.240406 2753 apiserver.go:52] "Watching apiserver"
Nov 12 20:49:13.309882 sudo[2781]: pam_unix(sudo:session): session closed for user root
Nov 12 20:49:13.318335 kubelet[2753]: I1112 20:49:13.318280 2753 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Nov 12 20:49:13.371715 kubelet[2753]: E1112 20:49:13.371182 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:13.374073 kubelet[2753]: E1112 20:49:13.374021 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:13.377860 kubelet[2753]: E1112 20:49:13.377769 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:13.477484 kubelet[2753]: I1112 20:49:13.476261 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-5-c2b3883be7" podStartSLOduration=2.476205697 podStartE2EDuration="2.476205697s" podCreationTimestamp="2024-11-12 20:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:13.461880905 +0000 UTC m=+1.371516124" watchObservedRunningTime="2024-11-12 20:49:13.476205697 +0000 UTC m=+1.385840857"
Nov 12 20:49:13.517965 kubelet[2753]: I1112 20:49:13.516909 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-5-c2b3883be7" podStartSLOduration=1.5168491830000002 podStartE2EDuration="1.516849183s" podCreationTimestamp="2024-11-12 20:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:13.486061838 +0000 UTC m=+1.395697022" watchObservedRunningTime="2024-11-12 20:49:13.516849183 +0000 UTC m=+1.426484359"
Nov 12 20:49:13.549517 kubelet[2753]: I1112 20:49:13.549467 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-5-c2b3883be7" podStartSLOduration=2.549303781 podStartE2EDuration="2.549303781s" podCreationTimestamp="2024-11-12 20:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:13.51906655 +0000 UTC m=+1.428701733" watchObservedRunningTime="2024-11-12 20:49:13.549303781 +0000 UTC m=+1.458938988"
Nov 12 20:49:14.373916 kubelet[2753]: E1112 20:49:14.373824 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:15.531073 update_engine[1566]: I20241112 20:49:15.529455 1566 update_attempter.cc:509] Updating boot flags...
Nov 12 20:49:15.596735 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2813)
Nov 12 20:49:15.724398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2812)
Nov 12 20:49:15.838316 sudo[1808]: pam_unix(sudo:session): session closed for user root
Nov 12 20:49:15.843956 sshd[1801]: pam_unix(sshd:session): session closed for user core
Nov 12 20:49:15.848158 systemd[1]: sshd@6-164.92.88.26:22-139.178.68.195:56484.service: Deactivated successfully.
Nov 12 20:49:15.855002 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:49:15.856584 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:49:15.860297 systemd-logind[1560]: Removed session 7.
Nov 12 20:49:20.577792 kubelet[2753]: E1112 20:49:20.577452 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:20.972326 kubelet[2753]: E1112 20:49:20.971361 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:21.387380 kubelet[2753]: E1112 20:49:21.386760 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:21.389139 kubelet[2753]: E1112 20:49:21.389098 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:22.241313 kubelet[2753]: E1112 20:49:22.238191 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:22.389729 kubelet[2753]: E1112 20:49:22.389664 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:22.391468 kubelet[2753]: E1112 20:49:22.390421 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:23.448500 kubelet[2753]: I1112 20:49:23.447766 2753 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:49:23.452217 containerd[1590]: time="2024-11-12T20:49:23.451083166Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:49:23.452922 kubelet[2753]: I1112 20:49:23.451451 2753 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:49:24.340709 kubelet[2753]: I1112 20:49:24.338049 2753 topology_manager.go:215] "Topology Admit Handler" podUID="27e8c9de-f80f-47ca-b5fd-8ac59695c8b6" podNamespace="kube-system" podName="kube-proxy-scksw"
Nov 12 20:49:24.357192 kubelet[2753]: I1112 20:49:24.357008 2753 topology_manager.go:215] "Topology Admit Handler" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" podNamespace="kube-system" podName="cilium-m8zcb"
Nov 12 20:49:24.409949 kubelet[2753]: I1112 20:49:24.409671 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27e8c9de-f80f-47ca-b5fd-8ac59695c8b6-lib-modules\") pod \"kube-proxy-scksw\" (UID: \"27e8c9de-f80f-47ca-b5fd-8ac59695c8b6\") " pod="kube-system/kube-proxy-scksw"
Nov 12 20:49:24.409949 kubelet[2753]: I1112 20:49:24.409870 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c40d5f72-9e2a-4488-b81a-b0941c030539-clustermesh-secrets\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.410669 kubelet[2753]: I1112 20:49:24.410457 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-xtables-lock\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.410669 kubelet[2753]: I1112 20:49:24.410626 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-config-path\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.411211 kubelet[2753]: I1112 20:49:24.410990 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4z6\" (UniqueName: \"kubernetes.io/projected/27e8c9de-f80f-47ca-b5fd-8ac59695c8b6-kube-api-access-mc4z6\") pod \"kube-proxy-scksw\" (UID: \"27e8c9de-f80f-47ca-b5fd-8ac59695c8b6\") " pod="kube-system/kube-proxy-scksw"
Nov 12 20:49:24.411211 kubelet[2753]: I1112 20:49:24.411098 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-bpf-maps\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.411641 kubelet[2753]: I1112 20:49:24.411185 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-hubble-tls\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.411641 kubelet[2753]: I1112 20:49:24.411502 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkzs\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-kube-api-access-qdkzs\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.411641 kubelet[2753]: I1112 20:49:24.411570 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27e8c9de-f80f-47ca-b5fd-8ac59695c8b6-xtables-lock\") pod \"kube-proxy-scksw\" (UID: \"27e8c9de-f80f-47ca-b5fd-8ac59695c8b6\") " pod="kube-system/kube-proxy-scksw"
Nov 12 20:49:24.411641 kubelet[2753]: I1112 20:49:24.411608 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-hostproc\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412037 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-cgroup\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412115 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-etc-cni-netd\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412170 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-lib-modules\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412202 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-kernel\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412264 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27e8c9de-f80f-47ca-b5fd-8ac59695c8b6-kube-proxy\") pod \"kube-proxy-scksw\" (UID: \"27e8c9de-f80f-47ca-b5fd-8ac59695c8b6\") " pod="kube-system/kube-proxy-scksw"
Nov 12 20:49:24.412381 kubelet[2753]: I1112 20:49:24.412313 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-run\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412923 kubelet[2753]: I1112 20:49:24.412467 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cni-path\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.412923 kubelet[2753]: I1112 20:49:24.412547 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-net\") pod \"cilium-m8zcb\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") " pod="kube-system/cilium-m8zcb"
Nov 12 20:49:24.532772 kubelet[2753]: I1112 20:49:24.527486 2753 topology_manager.go:215] "Topology Admit Handler" podUID="d712e675-c127-4723-a5c9-f628a70bc782" podNamespace="kube-system" podName="cilium-operator-5cc964979-pmzqz"
Nov 12 20:49:24.628814 kubelet[2753]: I1112 20:49:24.619881 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6nsz\" (UniqueName: \"kubernetes.io/projected/d712e675-c127-4723-a5c9-f628a70bc782-kube-api-access-w6nsz\") pod \"cilium-operator-5cc964979-pmzqz\" (UID: \"d712e675-c127-4723-a5c9-f628a70bc782\") " pod="kube-system/cilium-operator-5cc964979-pmzqz"
Nov 12 20:49:24.628814 kubelet[2753]: I1112 20:49:24.619954 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d712e675-c127-4723-a5c9-f628a70bc782-cilium-config-path\") pod \"cilium-operator-5cc964979-pmzqz\" (UID: \"d712e675-c127-4723-a5c9-f628a70bc782\") " pod="kube-system/cilium-operator-5cc964979-pmzqz"
Nov 12 20:49:24.653733 kubelet[2753]: E1112 20:49:24.652994 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:24.658732 containerd[1590]: time="2024-11-12T20:49:24.657660470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scksw,Uid:27e8c9de-f80f-47ca-b5fd-8ac59695c8b6,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:24.669464 kubelet[2753]: E1112 20:49:24.669401 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:24.671546 containerd[1590]: time="2024-11-12T20:49:24.670274901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8zcb,Uid:c40d5f72-9e2a-4488-b81a-b0941c030539,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:24.817278 containerd[1590]: time="2024-11-12T20:49:24.816851400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:24.817278 containerd[1590]: time="2024-11-12T20:49:24.816933052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:24.817278 containerd[1590]: time="2024-11-12T20:49:24.816949161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.817278 containerd[1590]: time="2024-11-12T20:49:24.817078299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.835819 containerd[1590]: time="2024-11-12T20:49:24.835372337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:24.835819 containerd[1590]: time="2024-11-12T20:49:24.835490255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:24.838059 containerd[1590]: time="2024-11-12T20:49:24.836119177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.838059 containerd[1590]: time="2024-11-12T20:49:24.836775771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.863095 kubelet[2753]: E1112 20:49:24.862923 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:24.866105 containerd[1590]: time="2024-11-12T20:49:24.866058337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pmzqz,Uid:d712e675-c127-4723-a5c9-f628a70bc782,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:24.899588 containerd[1590]: time="2024-11-12T20:49:24.899540415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scksw,Uid:27e8c9de-f80f-47ca-b5fd-8ac59695c8b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c2cd5e3d6b02a9be92f220f373edae3f35f32efa5a9d7bae4ad67f93771c6dc\""
Nov 12 20:49:24.901225 kubelet[2753]: E1112 20:49:24.901182 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:24.906065 containerd[1590]: time="2024-11-12T20:49:24.905643691Z" level=info msg="CreateContainer within sandbox \"6c2cd5e3d6b02a9be92f220f373edae3f35f32efa5a9d7bae4ad67f93771c6dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:49:24.934345 containerd[1590]: time="2024-11-12T20:49:24.934057137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8zcb,Uid:c40d5f72-9e2a-4488-b81a-b0941c030539,Namespace:kube-system,Attempt:0,} returns sandbox id \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\""
Nov 12 20:49:24.937020 kubelet[2753]: E1112 20:49:24.935153 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:24.953100 containerd[1590]: time="2024-11-12T20:49:24.952956378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:24.953578 containerd[1590]: time="2024-11-12T20:49:24.953541748Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Nov 12 20:49:24.954265 containerd[1590]: time="2024-11-12T20:49:24.954017148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:24.954265 containerd[1590]: time="2024-11-12T20:49:24.954057327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.954265 containerd[1590]: time="2024-11-12T20:49:24.954209593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:24.977056 containerd[1590]: time="2024-11-12T20:49:24.976979700Z" level=info msg="CreateContainer within sandbox \"6c2cd5e3d6b02a9be92f220f373edae3f35f32efa5a9d7bae4ad67f93771c6dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e09309f25f129a19cfa5acb51568fb3a209b0ce03f10e9dd1cc14f5b59d3767\""
Nov 12 20:49:24.984251 containerd[1590]: time="2024-11-12T20:49:24.984084777Z" level=info msg="StartContainer for \"3e09309f25f129a19cfa5acb51568fb3a209b0ce03f10e9dd1cc14f5b59d3767\""
Nov 12 20:49:25.049890 containerd[1590]: time="2024-11-12T20:49:25.049812874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pmzqz,Uid:d712e675-c127-4723-a5c9-f628a70bc782,Namespace:kube-system,Attempt:0,} returns sandbox id \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\""
Nov 12 20:49:25.051812 kubelet[2753]: E1112 20:49:25.050769 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:25.129784 containerd[1590]: time="2024-11-12T20:49:25.129731471Z" level=info msg="StartContainer for \"3e09309f25f129a19cfa5acb51568fb3a209b0ce03f10e9dd1cc14f5b59d3767\" returns successfully"
Nov 12 20:49:25.422384 kubelet[2753]: E1112 20:49:25.422244 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:30.401306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2507987136.mount: Deactivated successfully.
Nov 12 20:49:33.273780 containerd[1590]: time="2024-11-12T20:49:33.273643971Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:33.288031 containerd[1590]: time="2024-11-12T20:49:33.287925172Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735287"
Nov 12 20:49:33.291598 containerd[1590]: time="2024-11-12T20:49:33.291491734Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:33.296348 containerd[1590]: time="2024-11-12T20:49:33.296010123Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.342299471s"
Nov 12 20:49:33.296348 containerd[1590]: time="2024-11-12T20:49:33.296090169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Nov 12 20:49:33.302138 containerd[1590]: time="2024-11-12T20:49:33.301872593Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:49:33.304619 containerd[1590]: time="2024-11-12T20:49:33.304277752Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 12 20:49:33.444542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922041849.mount: Deactivated successfully.
Nov 12 20:49:33.452845 containerd[1590]: time="2024-11-12T20:49:33.452572766Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\""
Nov 12 20:49:33.456329 containerd[1590]: time="2024-11-12T20:49:33.455950789Z" level=info msg="StartContainer for \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\""
Nov 12 20:49:33.603668 containerd[1590]: time="2024-11-12T20:49:33.603489668Z" level=info msg="StartContainer for \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\" returns successfully"
Nov 12 20:49:33.725933 containerd[1590]: time="2024-11-12T20:49:33.705702514Z" level=info msg="shim disconnected" id=e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6 namespace=k8s.io
Nov 12 20:49:33.725933 containerd[1590]: time="2024-11-12T20:49:33.725928459Z" level=warning msg="cleaning up after shim disconnected" id=e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6 namespace=k8s.io
Nov 12 20:49:33.726324 containerd[1590]: time="2024-11-12T20:49:33.725952902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:49:34.433798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6-rootfs.mount: Deactivated successfully.
Nov 12 20:49:34.485851 kubelet[2753]: E1112 20:49:34.484831 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:34.494220 containerd[1590]: time="2024-11-12T20:49:34.493995135Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:49:34.530221 kubelet[2753]: I1112 20:49:34.530172 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-scksw" podStartSLOduration=10.53012059 podStartE2EDuration="10.53012059s" podCreationTimestamp="2024-11-12 20:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:25.433460496 +0000 UTC m=+13.343095694" watchObservedRunningTime="2024-11-12 20:49:34.53012059 +0000 UTC m=+22.439755770"
Nov 12 20:49:34.540448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13702344.mount: Deactivated successfully.
Nov 12 20:49:34.551476 containerd[1590]: time="2024-11-12T20:49:34.551403251Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\""
Nov 12 20:49:34.552510 containerd[1590]: time="2024-11-12T20:49:34.552468686Z" level=info msg="StartContainer for \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\""
Nov 12 20:49:34.667940 containerd[1590]: time="2024-11-12T20:49:34.667875233Z" level=info msg="StartContainer for \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\" returns successfully"
Nov 12 20:49:34.688896 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:49:34.690008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:49:34.690120 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:49:34.700067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:49:34.744632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:49:34.751715 containerd[1590]: time="2024-11-12T20:49:34.751386232Z" level=info msg="shim disconnected" id=9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45 namespace=k8s.io
Nov 12 20:49:34.751715 containerd[1590]: time="2024-11-12T20:49:34.751459627Z" level=warning msg="cleaning up after shim disconnected" id=9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45 namespace=k8s.io
Nov 12 20:49:34.751715 containerd[1590]: time="2024-11-12T20:49:34.751475758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:49:34.775241 containerd[1590]: time="2024-11-12T20:49:34.775162107Z" level=warning msg="cleanup warnings time=\"2024-11-12T20:49:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 20:49:35.433887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45-rootfs.mount: Deactivated successfully.
Nov 12 20:49:35.490329 kubelet[2753]: E1112 20:49:35.489903 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:35.498412 containerd[1590]: time="2024-11-12T20:49:35.497875927Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:49:35.588312 containerd[1590]: time="2024-11-12T20:49:35.588075320Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\""
Nov 12 20:49:35.589192 containerd[1590]: time="2024-11-12T20:49:35.588967794Z" level=info msg="StartContainer for \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\""
Nov 12 20:49:35.616884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742047142.mount: Deactivated successfully.
Nov 12 20:49:35.730799 containerd[1590]: time="2024-11-12T20:49:35.730413156Z" level=info msg="StartContainer for \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\" returns successfully"
Nov 12 20:49:35.793642 containerd[1590]: time="2024-11-12T20:49:35.793373651Z" level=info msg="shim disconnected" id=d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606 namespace=k8s.io
Nov 12 20:49:35.793642 containerd[1590]: time="2024-11-12T20:49:35.793493540Z" level=warning msg="cleaning up after shim disconnected" id=d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606 namespace=k8s.io
Nov 12 20:49:35.793642 containerd[1590]: time="2024-11-12T20:49:35.793508191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:49:36.445438 containerd[1590]: time="2024-11-12T20:49:36.445359554Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:36.448655 containerd[1590]: time="2024-11-12T20:49:36.448574821Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221"
Nov 12 20:49:36.452221 containerd[1590]: time="2024-11-12T20:49:36.452094350Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:49:36.456000 containerd[1590]: time="2024-11-12T20:49:36.454946304Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.15054258s"
Nov 12 20:49:36.456000 containerd[1590]: time="2024-11-12T20:49:36.455013979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 12 20:49:36.460584 containerd[1590]: time="2024-11-12T20:49:36.460538275Z" level=info msg="CreateContainer within sandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 12 20:49:36.495420 containerd[1590]: time="2024-11-12T20:49:36.495340038Z" level=info msg="CreateContainer within sandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\""
Nov 12 20:49:36.497296 containerd[1590]: time="2024-11-12T20:49:36.496263170Z" level=info msg="StartContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\""
Nov 12 20:49:36.497528 kubelet[2753]: E1112 20:49:36.496762 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:36.503952 containerd[1590]: time="2024-11-12T20:49:36.503418967Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:49:36.565604 containerd[1590]: time="2024-11-12T20:49:36.563222728Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\""
Nov 12 20:49:36.574694 containerd[1590]: time="2024-11-12T20:49:36.574087216Z" level=info msg="StartContainer for \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\""
Nov 12 20:49:36.589749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131614492.mount: Deactivated successfully.
Nov 12 20:49:36.688558 containerd[1590]: time="2024-11-12T20:49:36.688500756Z" level=info msg="StartContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" returns successfully"
Nov 12 20:49:36.720009 containerd[1590]: time="2024-11-12T20:49:36.719571447Z" level=info msg="StartContainer for \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\" returns successfully"
Nov 12 20:49:36.767831 containerd[1590]: time="2024-11-12T20:49:36.767730415Z" level=info msg="shim disconnected" id=3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548 namespace=k8s.io
Nov 12 20:49:36.767831 containerd[1590]: time="2024-11-12T20:49:36.767820580Z" level=warning msg="cleaning up after shim disconnected" id=3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548 namespace=k8s.io
Nov 12 20:49:36.768170 containerd[1590]: time="2024-11-12T20:49:36.767835158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:49:37.506285 kubelet[2753]: E1112 20:49:37.506251 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:37.513591 kubelet[2753]: E1112 20:49:37.513553 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:37.516249 containerd[1590]: time="2024-11-12T20:49:37.516201617Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 12 20:49:37.575714 containerd[1590]: time="2024-11-12T20:49:37.572186337Z" level=info msg="CreateContainer within sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\""
Nov 12 20:49:37.578198 containerd[1590]: time="2024-11-12T20:49:37.578080327Z" level=info msg="StartContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\""
Nov 12 20:49:37.758646 containerd[1590]: time="2024-11-12T20:49:37.758198863Z" level=info msg="StartContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" returns successfully"
Nov 12 20:49:38.220783 kubelet[2753]: I1112 20:49:38.220619 2753 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:49:38.289112 kubelet[2753]: I1112 20:49:38.288971 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pmzqz" podStartSLOduration=2.888434055 podStartE2EDuration="14.288904017s" podCreationTimestamp="2024-11-12 20:49:24 +0000 UTC" firstStartedPulling="2024-11-12 20:49:25.054830051 +0000 UTC m=+12.964465212" lastFinishedPulling="2024-11-12 20:49:36.455300002 +0000 UTC m=+24.364935174" observedRunningTime="2024-11-12 20:49:37.743124673 +0000 UTC m=+25.652759859" watchObservedRunningTime="2024-11-12 20:49:38.288904017 +0000 UTC m=+26.198539209"
Nov 12 20:49:38.289707 kubelet[2753]: I1112 20:49:38.289651 2753 topology_manager.go:215] "Topology Admit Handler" podUID="50f61b9b-e81f-44ad-936d-6537f405f5db" podNamespace="kube-system" podName="coredns-76f75df574-k4mw2"
Nov 12 20:49:38.298717 kubelet[2753]: I1112 20:49:38.297984 2753 topology_manager.go:215] "Topology Admit Handler" podUID="17ce42f1-6bf7-4075-9e75-019eaa0ba319" podNamespace="kube-system" podName="coredns-76f75df574-fvssm"
Nov 12 20:49:38.345168 kubelet[2753]: I1112 20:49:38.344973 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50f61b9b-e81f-44ad-936d-6537f405f5db-config-volume\") pod \"coredns-76f75df574-k4mw2\" (UID: \"50f61b9b-e81f-44ad-936d-6537f405f5db\") " pod="kube-system/coredns-76f75df574-k4mw2"
Nov 12 20:49:38.345459 kubelet[2753]: I1112 20:49:38.345439 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88f9\" (UniqueName: \"kubernetes.io/projected/50f61b9b-e81f-44ad-936d-6537f405f5db-kube-api-access-p88f9\") pod \"coredns-76f75df574-k4mw2\" (UID: \"50f61b9b-e81f-44ad-936d-6537f405f5db\") " pod="kube-system/coredns-76f75df574-k4mw2"
Nov 12 20:49:38.345628 kubelet[2753]: I1112 20:49:38.345600 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqsf\" (UniqueName: \"kubernetes.io/projected/17ce42f1-6bf7-4075-9e75-019eaa0ba319-kube-api-access-tqqsf\") pod \"coredns-76f75df574-fvssm\" (UID: \"17ce42f1-6bf7-4075-9e75-019eaa0ba319\") " pod="kube-system/coredns-76f75df574-fvssm"
Nov 12 20:49:38.346262 kubelet[2753]: I1112 20:49:38.345740 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17ce42f1-6bf7-4075-9e75-019eaa0ba319-config-volume\") pod \"coredns-76f75df574-fvssm\" (UID: \"17ce42f1-6bf7-4075-9e75-019eaa0ba319\") " pod="kube-system/coredns-76f75df574-fvssm"
Nov 12 20:49:38.441469 systemd[1]: run-containerd-runc-k8s.io-8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6-runc.gORL1m.mount: Deactivated successfully.
Nov 12 20:49:38.529274 kubelet[2753]: E1112 20:49:38.529109 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:38.533487 kubelet[2753]: E1112 20:49:38.533380 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:38.602784 kubelet[2753]: E1112 20:49:38.602645 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:38.605164 containerd[1590]: time="2024-11-12T20:49:38.604580698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k4mw2,Uid:50f61b9b-e81f-44ad-936d-6537f405f5db,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:38.611248 kubelet[2753]: E1112 20:49:38.610571 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:38.612401 containerd[1590]: time="2024-11-12T20:49:38.612003078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fvssm,Uid:17ce42f1-6bf7-4075-9e75-019eaa0ba319,Namespace:kube-system,Attempt:0,}"
Nov 12 20:49:39.531594 kubelet[2753]: E1112 20:49:39.531552 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:40.533738 kubelet[2753]: E1112 20:49:40.533528 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:40.721694 systemd-networkd[1223]: cilium_host: Link UP
Nov 12 20:49:40.722448 systemd-networkd[1223]: cilium_net: Link UP
Nov 12 20:49:40.722632 systemd-networkd[1223]: cilium_net: Gained carrier
Nov 12 20:49:40.725165 systemd-networkd[1223]: cilium_host: Gained carrier
Nov 12 20:49:40.725305 systemd-networkd[1223]: cilium_net: Gained IPv6LL
Nov 12 20:49:40.725453 systemd-networkd[1223]: cilium_host: Gained IPv6LL
Nov 12 20:49:40.884563 systemd-networkd[1223]: cilium_vxlan: Link UP
Nov 12 20:49:40.884574 systemd-networkd[1223]: cilium_vxlan: Gained carrier
Nov 12 20:49:41.686732 kernel: NET: Registered PF_ALG protocol family
Nov 12 20:49:42.356848 systemd-networkd[1223]: cilium_vxlan: Gained IPv6LL
Nov 12 20:49:42.760990 systemd-networkd[1223]: lxc_health: Link UP
Nov 12 20:49:42.775003 systemd-networkd[1223]: lxc_health: Gained carrier
Nov 12 20:49:43.267820 systemd-networkd[1223]: lxca0eb09ec470c: Link UP
Nov 12 20:49:43.274029 kernel: eth0: renamed from tmp7c61d
Nov 12 20:49:43.284604 systemd-networkd[1223]: lxca0eb09ec470c: Gained carrier
Nov 12 20:49:43.346851 systemd-networkd[1223]: lxc59904b206bc4: Link UP
Nov 12 20:49:43.361927 kernel: eth0: renamed from tmpd6f4d
Nov 12 20:49:43.369409 systemd-networkd[1223]: lxc59904b206bc4: Gained carrier
Nov 12 20:49:44.401977 systemd-networkd[1223]: lxca0eb09ec470c: Gained IPv6LL
Nov 12 20:49:44.657932 systemd-networkd[1223]: lxc59904b206bc4: Gained IPv6LL
Nov 12 20:49:44.658415 systemd-networkd[1223]: lxc_health: Gained IPv6LL
Nov 12 20:49:44.674536 kubelet[2753]: E1112 20:49:44.673102 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:44.727350 kubelet[2753]: I1112 20:49:44.726104 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m8zcb" podStartSLOduration=12.366878108 podStartE2EDuration="20.726050335s" podCreationTimestamp="2024-11-12 20:49:24 +0000 UTC" firstStartedPulling="2024-11-12 20:49:24.93742464 +0000 UTC m=+12.847059812" lastFinishedPulling="2024-11-12 20:49:33.296596878 +0000 UTC m=+21.206232039" observedRunningTime="2024-11-12 20:49:38.575170019 +0000 UTC m=+26.484805199" watchObservedRunningTime="2024-11-12 20:49:44.726050335 +0000 UTC m=+32.635685494"
Nov 12 20:49:45.555546 kubelet[2753]: E1112 20:49:45.554516 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:46.555568 kubelet[2753]: E1112 20:49:46.555488 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:49.266294 containerd[1590]: time="2024-11-12T20:49:49.266145743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:49.268455 containerd[1590]: time="2024-11-12T20:49:49.267540645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:49.268455 containerd[1590]: time="2024-11-12T20:49:49.267634333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.269002 containerd[1590]: time="2024-11-12T20:49:49.268196385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.323321 containerd[1590]: time="2024-11-12T20:49:49.322918048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:49:49.323321 containerd[1590]: time="2024-11-12T20:49:49.323014717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:49:49.324034 containerd[1590]: time="2024-11-12T20:49:49.323480165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.324650 containerd[1590]: time="2024-11-12T20:49:49.324086714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:49:49.479277 containerd[1590]: time="2024-11-12T20:49:49.479198684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fvssm,Uid:17ce42f1-6bf7-4075-9e75-019eaa0ba319,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c61da7c21c7c023dcf186e8e8d6b737faee56604c8331fa8296bf35d935aec9\""
Nov 12 20:49:49.480920 kubelet[2753]: E1112 20:49:49.480232 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:49.496295 containerd[1590]: time="2024-11-12T20:49:49.496228266Z" level=info msg="CreateContainer within sandbox \"7c61da7c21c7c023dcf186e8e8d6b737faee56604c8331fa8296bf35d935aec9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 20:49:49.531469 containerd[1590]: time="2024-11-12T20:49:49.531297883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-k4mw2,Uid:50f61b9b-e81f-44ad-936d-6537f405f5db,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6f4d90510bb28c9915073ba574c996b61d548ca291ece429ce4266c4287dfe0\""
Nov 12 20:49:49.532431 kubelet[2753]: E1112 20:49:49.532382 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:49.536412 containerd[1590]: time="2024-11-12T20:49:49.536356213Z" level=info msg="CreateContainer within sandbox \"d6f4d90510bb28c9915073ba574c996b61d548ca291ece429ce4266c4287dfe0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 20:49:49.579491 containerd[1590]: time="2024-11-12T20:49:49.579420889Z" level=info msg="CreateContainer within sandbox \"7c61da7c21c7c023dcf186e8e8d6b737faee56604c8331fa8296bf35d935aec9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"637d610795c4fa9eacda024363ec3ea6860957ba367aea2c34edfb656f631db2\""
Nov 12 20:49:49.580724 containerd[1590]: time="2024-11-12T20:49:49.580270803Z" level=info msg="StartContainer for \"637d610795c4fa9eacda024363ec3ea6860957ba367aea2c34edfb656f631db2\""
Nov 12 20:49:49.602423 containerd[1590]: time="2024-11-12T20:49:49.602233785Z" level=info msg="CreateContainer within sandbox \"d6f4d90510bb28c9915073ba574c996b61d548ca291ece429ce4266c4287dfe0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69b40f52041ad631f1646f90a52e146c35931026038217763c0517833c2e264c\""
Nov 12 20:49:49.607240 containerd[1590]: time="2024-11-12T20:49:49.604514509Z" level=info msg="StartContainer for \"69b40f52041ad631f1646f90a52e146c35931026038217763c0517833c2e264c\""
Nov 12 20:49:49.701111 containerd[1590]: time="2024-11-12T20:49:49.701060075Z" level=info msg="StartContainer for \"637d610795c4fa9eacda024363ec3ea6860957ba367aea2c34edfb656f631db2\" returns successfully"
Nov 12 20:49:49.738553 containerd[1590]: time="2024-11-12T20:49:49.738484860Z" level=info msg="StartContainer for \"69b40f52041ad631f1646f90a52e146c35931026038217763c0517833c2e264c\" returns successfully"
Nov 12 20:49:50.285298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962451646.mount: Deactivated successfully.
Nov 12 20:49:50.581433 kubelet[2753]: E1112 20:49:50.579427 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:50.594504 kubelet[2753]: E1112 20:49:50.592294 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:50.612726 kubelet[2753]: I1112 20:49:50.612623 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fvssm" podStartSLOduration=26.612564368 podStartE2EDuration="26.612564368s" podCreationTimestamp="2024-11-12 20:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:50.611382566 +0000 UTC m=+38.521017750" watchObservedRunningTime="2024-11-12 20:49:50.612564368 +0000 UTC m=+38.522199550"
Nov 12 20:49:50.670180 kubelet[2753]: I1112 20:49:50.668453 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-k4mw2" podStartSLOduration=26.668359544 podStartE2EDuration="26.668359544s" podCreationTimestamp="2024-11-12 20:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:49:50.640864668 +0000 UTC m=+38.550499849" watchObservedRunningTime="2024-11-12 20:49:50.668359544 +0000 UTC m=+38.577994728"
Nov 12 20:49:51.594628 kubelet[2753]: E1112 20:49:51.594570 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:51.595989 kubelet[2753]: E1112 20:49:51.595926 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:52.597391 kubelet[2753]: E1112 20:49:52.597067 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:49:52.597391 kubelet[2753]: E1112 20:49:52.597270 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:04.516367 systemd[1]: Started sshd@7-164.92.88.26:22-139.178.68.195:39022.service - OpenSSH per-connection server daemon (139.178.68.195:39022).
Nov 12 20:50:04.623379 sshd[4146]: Accepted publickey for core from 139.178.68.195 port 39022 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:04.626227 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:04.641208 systemd-logind[1560]: New session 8 of user core.
Nov 12 20:50:04.647256 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 20:50:05.709886 sshd[4146]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:05.721334 systemd[1]: sshd@7-164.92.88.26:22-139.178.68.195:39022.service: Deactivated successfully.
Nov 12 20:50:05.728501 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 20:50:05.728660 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit.
Nov 12 20:50:05.735972 systemd-logind[1560]: Removed session 8.
Nov 12 20:50:10.726641 systemd[1]: Started sshd@8-164.92.88.26:22-139.178.68.195:33946.service - OpenSSH per-connection server daemon (139.178.68.195:33946).
Nov 12 20:50:10.791392 sshd[4163]: Accepted publickey for core from 139.178.68.195 port 33946 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:10.794963 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:10.806082 systemd-logind[1560]: New session 9 of user core.
Nov 12 20:50:10.810538 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 12 20:50:11.010952 sshd[4163]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:11.016270 systemd[1]: sshd@8-164.92.88.26:22-139.178.68.195:33946.service: Deactivated successfully.
Nov 12 20:50:11.026180 systemd[1]: session-9.scope: Deactivated successfully.
Nov 12 20:50:11.031149 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit.
Nov 12 20:50:11.033437 systemd-logind[1560]: Removed session 9.
Nov 12 20:50:16.021106 systemd[1]: Started sshd@9-164.92.88.26:22-139.178.68.195:42362.service - OpenSSH per-connection server daemon (139.178.68.195:42362).
Nov 12 20:50:16.068288 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 42362 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:16.070971 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:16.080094 systemd-logind[1560]: New session 10 of user core.
Nov 12 20:50:16.085522 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 20:50:16.266277 sshd[4180]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:16.272202 systemd[1]: sshd@9-164.92.88.26:22-139.178.68.195:42362.service: Deactivated successfully.
Nov 12 20:50:16.272854 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit.
Nov 12 20:50:16.280048 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 20:50:16.282698 systemd-logind[1560]: Removed session 10.
Nov 12 20:50:21.282322 systemd[1]: Started sshd@10-164.92.88.26:22-139.178.68.195:42366.service - OpenSSH per-connection server daemon (139.178.68.195:42366).
Nov 12 20:50:21.335227 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 42366 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:21.338125 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:21.347239 systemd-logind[1560]: New session 11 of user core.
Nov 12 20:50:21.352245 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 20:50:21.541038 sshd[4195]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:21.546242 systemd[1]: sshd@10-164.92.88.26:22-139.178.68.195:42366.service: Deactivated successfully.
Nov 12 20:50:21.552646 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 20:50:21.554491 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit.
Nov 12 20:50:21.557376 systemd-logind[1560]: Removed session 11.
Nov 12 20:50:26.555225 systemd[1]: Started sshd@11-164.92.88.26:22-139.178.68.195:44972.service - OpenSSH per-connection server daemon (139.178.68.195:44972).
Nov 12 20:50:26.611801 sshd[4212]: Accepted publickey for core from 139.178.68.195 port 44972 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:26.615289 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:26.622802 systemd-logind[1560]: New session 12 of user core.
Nov 12 20:50:26.632521 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:50:26.804950 sshd[4212]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:26.815251 systemd[1]: Started sshd@12-164.92.88.26:22-139.178.68.195:44982.service - OpenSSH per-connection server daemon (139.178.68.195:44982).
Nov 12 20:50:26.817585 systemd[1]: sshd@11-164.92.88.26:22-139.178.68.195:44972.service: Deactivated successfully.
Nov 12 20:50:26.826265 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:50:26.828514 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:50:26.832674 systemd-logind[1560]: Removed session 12.
Nov 12 20:50:26.887889 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 44982 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:26.891727 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:26.900313 systemd-logind[1560]: New session 13 of user core.
Nov 12 20:50:26.910372 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:50:27.182161 sshd[4224]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:27.207860 systemd[1]: Started sshd@13-164.92.88.26:22-139.178.68.195:44998.service - OpenSSH per-connection server daemon (139.178.68.195:44998).
Nov 12 20:50:27.208830 systemd[1]: sshd@12-164.92.88.26:22-139.178.68.195:44982.service: Deactivated successfully.
Nov 12 20:50:27.220585 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:50:27.231246 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:50:27.244321 systemd-logind[1560]: Removed session 13.
Nov 12 20:50:27.295125 sshd[4236]: Accepted publickey for core from 139.178.68.195 port 44998 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:27.298392 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:27.307333 systemd-logind[1560]: New session 14 of user core.
Nov 12 20:50:27.315440 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:50:27.510947 sshd[4236]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:27.517467 systemd[1]: sshd@13-164.92.88.26:22-139.178.68.195:44998.service: Deactivated successfully.
Nov 12 20:50:27.526867 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:50:27.529549 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:50:27.531429 systemd-logind[1560]: Removed session 14.
Nov 12 20:50:32.525280 systemd[1]: Started sshd@14-164.92.88.26:22-139.178.68.195:45006.service - OpenSSH per-connection server daemon (139.178.68.195:45006).
Nov 12 20:50:32.594285 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 45006 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:32.596693 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:32.606098 systemd-logind[1560]: New session 15 of user core.
Nov 12 20:50:32.611475 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:50:32.783960 sshd[4253]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:32.789191 systemd[1]: sshd@14-164.92.88.26:22-139.178.68.195:45006.service: Deactivated successfully.
Nov 12 20:50:32.797058 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:50:32.798120 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:50:32.799935 systemd-logind[1560]: Removed session 15.
Nov 12 20:50:33.305837 kubelet[2753]: E1112 20:50:33.305793 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:36.308810 kubelet[2753]: E1112 20:50:36.306905 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:37.794719 systemd[1]: Started sshd@15-164.92.88.26:22-139.178.68.195:56208.service - OpenSSH per-connection server daemon (139.178.68.195:56208).
Nov 12 20:50:37.855978 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 56208 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:37.858724 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:37.867065 systemd-logind[1560]: New session 16 of user core.
Nov 12 20:50:37.871233 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:50:38.050087 sshd[4266]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:38.056406 systemd[1]: sshd@15-164.92.88.26:22-139.178.68.195:56208.service: Deactivated successfully.
Nov 12 20:50:38.063878 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:50:38.064955 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:50:38.067547 systemd-logind[1560]: Removed session 16.
Nov 12 20:50:38.308519 kubelet[2753]: E1112 20:50:38.307516 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:43.069395 systemd[1]: Started sshd@16-164.92.88.26:22-139.178.68.195:56212.service - OpenSSH per-connection server daemon (139.178.68.195:56212).
Nov 12 20:50:43.123345 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 56212 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:43.126182 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:43.134586 systemd-logind[1560]: New session 17 of user core.
Nov 12 20:50:43.141350 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:50:43.305712 sshd[4280]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:43.318666 systemd[1]: Started sshd@17-164.92.88.26:22-139.178.68.195:56226.service - OpenSSH per-connection server daemon (139.178.68.195:56226).
Nov 12 20:50:43.319565 systemd[1]: sshd@16-164.92.88.26:22-139.178.68.195:56212.service: Deactivated successfully.
Nov 12 20:50:43.325228 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:50:43.329062 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:50:43.332042 systemd-logind[1560]: Removed session 17.
Nov 12 20:50:43.384774 sshd[4291]: Accepted publickey for core from 139.178.68.195 port 56226 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:43.387339 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:43.395732 systemd-logind[1560]: New session 18 of user core.
Nov 12 20:50:43.403283 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:50:44.135407 sshd[4291]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:44.145310 systemd[1]: Started sshd@18-164.92.88.26:22-139.178.68.195:56232.service - OpenSSH per-connection server daemon (139.178.68.195:56232).
Nov 12 20:50:44.153502 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:50:44.160760 systemd[1]: sshd@17-164.92.88.26:22-139.178.68.195:56226.service: Deactivated successfully.
Nov 12 20:50:44.165189 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:50:44.169658 systemd-logind[1560]: Removed session 18.
Nov 12 20:50:44.232662 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 56232 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:44.235247 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:44.244400 systemd-logind[1560]: New session 19 of user core.
Nov 12 20:50:44.252246 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:50:46.740322 sshd[4302]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:46.757329 systemd[1]: Started sshd@19-164.92.88.26:22-139.178.68.195:38452.service - OpenSSH per-connection server daemon (139.178.68.195:38452).
Nov 12 20:50:46.763502 systemd[1]: sshd@18-164.92.88.26:22-139.178.68.195:56232.service: Deactivated successfully.
Nov 12 20:50:46.796098 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:50:46.797445 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:50:46.800775 systemd-logind[1560]: Removed session 19.
Nov 12 20:50:46.879490 sshd[4322]: Accepted publickey for core from 139.178.68.195 port 38452 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:46.883420 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:46.892333 systemd-logind[1560]: New session 20 of user core.
Nov 12 20:50:46.901370 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:50:47.448783 sshd[4322]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:47.470446 systemd[1]: Started sshd@20-164.92.88.26:22-139.178.68.195:38464.service - OpenSSH per-connection server daemon (139.178.68.195:38464).
Nov 12 20:50:47.474103 systemd[1]: sshd@19-164.92.88.26:22-139.178.68.195:38452.service: Deactivated successfully.
Nov 12 20:50:47.486238 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:50:47.490884 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:50:47.495766 systemd-logind[1560]: Removed session 20.
Nov 12 20:50:47.544532 sshd[4334]: Accepted publickey for core from 139.178.68.195 port 38464 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:47.547145 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:47.557502 systemd-logind[1560]: New session 21 of user core.
Nov 12 20:50:47.567455 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:50:47.745846 sshd[4334]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:47.752776 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:50:47.753834 systemd[1]: sshd@20-164.92.88.26:22-139.178.68.195:38464.service: Deactivated successfully.
Nov 12 20:50:47.763666 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:50:47.766203 systemd-logind[1560]: Removed session 21.
Nov 12 20:50:49.306761 kubelet[2753]: E1112 20:50:49.306494 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:52.306645 kubelet[2753]: E1112 20:50:52.306042 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:50:52.758345 systemd[1]: Started sshd@21-164.92.88.26:22-139.178.68.195:38474.service - OpenSSH per-connection server daemon (139.178.68.195:38474).
Nov 12 20:50:52.813581 sshd[4351]: Accepted publickey for core from 139.178.68.195 port 38474 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:52.816285 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:52.824743 systemd-logind[1560]: New session 22 of user core.
Nov 12 20:50:52.833387 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:50:53.027208 sshd[4351]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:53.032987 systemd[1]: sshd@21-164.92.88.26:22-139.178.68.195:38474.service: Deactivated successfully.
Nov 12 20:50:53.041403 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:50:53.042345 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:50:53.044240 systemd-logind[1560]: Removed session 22.
Nov 12 20:50:58.038151 systemd[1]: Started sshd@22-164.92.88.26:22-139.178.68.195:55184.service - OpenSSH per-connection server daemon (139.178.68.195:55184).
Nov 12 20:50:58.096362 sshd[4370]: Accepted publickey for core from 139.178.68.195 port 55184 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:50:58.098599 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:50:58.109066 systemd-logind[1560]: New session 23 of user core.
Nov 12 20:50:58.120419 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:50:58.285796 sshd[4370]: pam_unix(sshd:session): session closed for user core
Nov 12 20:50:58.290969 systemd[1]: sshd@22-164.92.88.26:22-139.178.68.195:55184.service: Deactivated successfully.
Nov 12 20:50:58.299841 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:50:58.301011 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:50:58.303205 systemd-logind[1560]: Removed session 23.
Nov 12 20:50:58.308299 kubelet[2753]: E1112 20:50:58.308168 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:02.307417 kubelet[2753]: E1112 20:51:02.306911 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:03.300218 systemd[1]: Started sshd@23-164.92.88.26:22-139.178.68.195:55188.service - OpenSSH per-connection server daemon (139.178.68.195:55188).
Nov 12 20:51:03.351555 sshd[4383]: Accepted publickey for core from 139.178.68.195 port 55188 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:51:03.354411 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:51:03.363220 systemd-logind[1560]: New session 24 of user core.
Nov 12 20:51:03.375322 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:51:03.539172 sshd[4383]: pam_unix(sshd:session): session closed for user core
Nov 12 20:51:03.545917 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:51:03.546616 systemd[1]: sshd@23-164.92.88.26:22-139.178.68.195:55188.service: Deactivated successfully.
Nov 12 20:51:03.555502 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:51:03.558165 systemd-logind[1560]: Removed session 24.
Nov 12 20:51:06.306104 kubelet[2753]: E1112 20:51:06.306047 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:08.552237 systemd[1]: Started sshd@24-164.92.88.26:22-139.178.68.195:51388.service - OpenSSH per-connection server daemon (139.178.68.195:51388).
Nov 12 20:51:08.612490 sshd[4397]: Accepted publickey for core from 139.178.68.195 port 51388 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:51:08.614671 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:51:08.623100 systemd-logind[1560]: New session 25 of user core.
Nov 12 20:51:08.630256 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:51:08.795161 sshd[4397]: pam_unix(sshd:session): session closed for user core
Nov 12 20:51:08.806453 systemd[1]: Started sshd@25-164.92.88.26:22-139.178.68.195:51390.service - OpenSSH per-connection server daemon (139.178.68.195:51390).
Nov 12 20:51:08.807331 systemd[1]: sshd@24-164.92.88.26:22-139.178.68.195:51388.service: Deactivated successfully.
Nov 12 20:51:08.819661 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:51:08.821831 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:51:08.826193 systemd-logind[1560]: Removed session 25.
Nov 12 20:51:08.863187 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 51390 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:51:08.866360 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:51:08.874196 systemd-logind[1560]: New session 26 of user core.
Nov 12 20:51:08.881263 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:51:10.856203 containerd[1590]: time="2024-11-12T20:51:10.856073497Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:51:10.904245 containerd[1590]: time="2024-11-12T20:51:10.904066177Z" level=info msg="StopContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" with timeout 30 (s)"
Nov 12 20:51:10.904245 containerd[1590]: time="2024-11-12T20:51:10.904184909Z" level=info msg="StopContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" with timeout 2 (s)"
Nov 12 20:51:10.907382 containerd[1590]: time="2024-11-12T20:51:10.907334053Z" level=info msg="Stop container \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" with signal terminated"
Nov 12 20:51:10.908135 containerd[1590]: time="2024-11-12T20:51:10.908040034Z" level=info msg="Stop container \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" with signal terminated"
Nov 12 20:51:10.919107 systemd-networkd[1223]: lxc_health: Link DOWN
Nov 12 20:51:10.919137 systemd-networkd[1223]: lxc_health: Lost carrier
Nov 12 20:51:11.001288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673-rootfs.mount: Deactivated successfully.
Nov 12 20:51:11.012995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6-rootfs.mount: Deactivated successfully.
Nov 12 20:51:11.017102 containerd[1590]: time="2024-11-12T20:51:11.016999756Z" level=info msg="shim disconnected" id=9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673 namespace=k8s.io
Nov 12 20:51:11.017102 containerd[1590]: time="2024-11-12T20:51:11.017096175Z" level=warning msg="cleaning up after shim disconnected" id=9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673 namespace=k8s.io
Nov 12 20:51:11.017783 containerd[1590]: time="2024-11-12T20:51:11.017114028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:11.021066 containerd[1590]: time="2024-11-12T20:51:11.020740063Z" level=info msg="shim disconnected" id=8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6 namespace=k8s.io
Nov 12 20:51:11.021222 containerd[1590]: time="2024-11-12T20:51:11.021069362Z" level=warning msg="cleaning up after shim disconnected" id=8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6 namespace=k8s.io
Nov 12 20:51:11.021222 containerd[1590]: time="2024-11-12T20:51:11.021086175Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:11.052755 containerd[1590]: time="2024-11-12T20:51:11.052520530Z" level=info msg="StopContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" returns successfully"
Nov 12 20:51:11.052755 containerd[1590]: time="2024-11-12T20:51:11.052585964Z" level=info msg="StopContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" returns successfully"
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.053784745Z" level=info msg="StopPodSandbox for \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054087522Z" level=info msg="StopPodSandbox for \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054132193Z" level=info msg="Container to stop \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054151839Z" level=info msg="Container to stop \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054168198Z" level=info msg="Container to stop \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054183534Z" level=info msg="Container to stop \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.054326 containerd[1590]: time="2024-11-12T20:51:11.054198762Z" level=info msg="Container to stop \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.064942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33-shm.mount: Deactivated successfully.
Nov 12 20:51:11.067166 containerd[1590]: time="2024-11-12T20:51:11.066633423Z" level=info msg="Container to stop \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 12 20:51:11.135908 containerd[1590]: time="2024-11-12T20:51:11.132063631Z" level=info msg="shim disconnected" id=479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33 namespace=k8s.io
Nov 12 20:51:11.135908 containerd[1590]: time="2024-11-12T20:51:11.132146152Z" level=warning msg="cleaning up after shim disconnected" id=479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33 namespace=k8s.io
Nov 12 20:51:11.135908 containerd[1590]: time="2024-11-12T20:51:11.132160859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:11.160115 containerd[1590]: time="2024-11-12T20:51:11.160027108Z" level=info msg="shim disconnected" id=319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f namespace=k8s.io
Nov 12 20:51:11.160115 containerd[1590]: time="2024-11-12T20:51:11.160093651Z" level=warning msg="cleaning up after shim disconnected" id=319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f namespace=k8s.io
Nov 12 20:51:11.160115 containerd[1590]: time="2024-11-12T20:51:11.160105359Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:11.164848 containerd[1590]: time="2024-11-12T20:51:11.164753527Z" level=info msg="TearDown network for sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" successfully"
Nov 12 20:51:11.164848 containerd[1590]: time="2024-11-12T20:51:11.164805560Z" level=info msg="StopPodSandbox for \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" returns successfully"
Nov 12 20:51:11.197718 containerd[1590]: time="2024-11-12T20:51:11.196661734Z" level=info msg="TearDown network for sandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" successfully"
Nov 12 20:51:11.197718 containerd[1590]: time="2024-11-12T20:51:11.196765342Z" level=info msg="StopPodSandbox for \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" returns successfully"
Nov 12 20:51:11.334136 kubelet[2753]: I1112 20:51:11.334070 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-cgroup\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.334963 kubelet[2753]: I1112 20:51:11.334801 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-kernel\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.334963 kubelet[2753]: I1112 20:51:11.334854 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-run\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.334963 kubelet[2753]: I1112 20:51:11.334885 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-net\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.334963 kubelet[2753]: I1112 20:51:11.334933 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c40d5f72-9e2a-4488-b81a-b0941c030539-clustermesh-secrets\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.334963 kubelet[2753]: I1112 20:51:11.334964 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-lib-modules\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335357 kubelet[2753]: I1112 20:51:11.334994 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cni-path\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335357 kubelet[2753]: I1112 20:51:11.335038 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-etc-cni-netd\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335357 kubelet[2753]: I1112 20:51:11.335127 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.335357 kubelet[2753]: I1112 20:51:11.334217 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.335357 kubelet[2753]: I1112 20:51:11.335200 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.335808 kubelet[2753]: I1112 20:51:11.335226 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.335808 kubelet[2753]: I1112 20:51:11.335252 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.335953 kubelet[2753]: I1112 20:51:11.335824 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-hubble-tls\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335953 kubelet[2753]: I1112 20:51:11.335875 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-xtables-lock\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335953 kubelet[2753]: I1112 20:51:11.335912 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-config-path\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.335953 kubelet[2753]: I1112 20:51:11.335951 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6nsz\" (UniqueName: \"kubernetes.io/projected/d712e675-c127-4723-a5c9-f628a70bc782-kube-api-access-w6nsz\") pod \"d712e675-c127-4723-a5c9-f628a70bc782\" (UID: \"d712e675-c127-4723-a5c9-f628a70bc782\") "
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.335982 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-hostproc\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336015 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdkzs\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-kube-api-access-qdkzs\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336044 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-bpf-maps\") pod \"c40d5f72-9e2a-4488-b81a-b0941c030539\" (UID: \"c40d5f72-9e2a-4488-b81a-b0941c030539\") "
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336079 2753 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d712e675-c127-4723-a5c9-f628a70bc782-cilium-config-path\") pod \"d712e675-c127-4723-a5c9-f628a70bc782\" (UID: \"d712e675-c127-4723-a5c9-f628a70bc782\") "
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336161 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-cgroup\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336184 2753 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-kernel\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.336221 kubelet[2753]: I1112 20:51:11.336202 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-run\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.336640 kubelet[2753]: I1112 20:51:11.336220 2753 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-host-proc-sys-net\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.336640 kubelet[2753]: I1112 20:51:11.336237 2753 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-lib-modules\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.336987 kubelet[2753]: I1112 20:51:11.336955 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cni-path" (OuterVolumeSpecName: "cni-path") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.337060 kubelet[2753]: I1112 20:51:11.337008 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.340733 kubelet[2753]: I1112 20:51:11.338446 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.344098 kubelet[2753]: I1112 20:51:11.344035 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-hostproc" (OuterVolumeSpecName: "hostproc") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.347635 kubelet[2753]: I1112 20:51:11.347563 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:51:11.347989 kubelet[2753]: I1112 20:51:11.347952 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 12 20:51:11.348304 kubelet[2753]: I1112 20:51:11.348262 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40d5f72-9e2a-4488-b81a-b0941c030539-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 12 20:51:11.349399 kubelet[2753]: I1112 20:51:11.349359 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d712e675-c127-4723-a5c9-f628a70bc782-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d712e675-c127-4723-a5c9-f628a70bc782" (UID: "d712e675-c127-4723-a5c9-f628a70bc782"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:51:11.351865 kubelet[2753]: I1112 20:51:11.351819 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 12 20:51:11.352019 kubelet[2753]: I1112 20:51:11.351993 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d712e675-c127-4723-a5c9-f628a70bc782-kube-api-access-w6nsz" (OuterVolumeSpecName: "kube-api-access-w6nsz") pod "d712e675-c127-4723-a5c9-f628a70bc782" (UID: "d712e675-c127-4723-a5c9-f628a70bc782"). InnerVolumeSpecName "kube-api-access-w6nsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:51:11.352469 kubelet[2753]: I1112 20:51:11.352442 2753 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-kube-api-access-qdkzs" (OuterVolumeSpecName: "kube-api-access-qdkzs") pod "c40d5f72-9e2a-4488-b81a-b0941c030539" (UID: "c40d5f72-9e2a-4488-b81a-b0941c030539"). InnerVolumeSpecName "kube-api-access-qdkzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 12 20:51:11.436837 kubelet[2753]: I1112 20:51:11.436770 2753 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qdkzs\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-kube-api-access-qdkzs\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.436837 kubelet[2753]: I1112 20:51:11.436826 2753 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-bpf-maps\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.436837 kubelet[2753]: I1112 20:51:11.436845 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d712e675-c127-4723-a5c9-f628a70bc782-cilium-config-path\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436863 2753 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-cni-path\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436892 2753 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c40d5f72-9e2a-4488-b81a-b0941c030539-clustermesh-secrets\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436910 2753 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c40d5f72-9e2a-4488-b81a-b0941c030539-hubble-tls\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436948 2753 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-etc-cni-netd\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436966 2753 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-xtables-lock\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.436984 2753 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c40d5f72-9e2a-4488-b81a-b0941c030539-cilium-config-path\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.437000 2753 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c40d5f72-9e2a-4488-b81a-b0941c030539-hostproc\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.437171 kubelet[2753]: I1112 20:51:11.437018 2753 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w6nsz\" (UniqueName: \"kubernetes.io/projected/d712e675-c127-4723-a5c9-f628a70bc782-kube-api-access-w6nsz\") on node \"ci-4081.2.0-5-c2b3883be7\" DevicePath \"\""
Nov 12 20:51:11.826379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f-rootfs.mount: Deactivated successfully.
Nov 12 20:51:11.826642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f-shm.mount: Deactivated successfully.
Nov 12 20:51:11.827097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33-rootfs.mount: Deactivated successfully.
Nov 12 20:51:11.827302 systemd[1]: var-lib-kubelet-pods-d712e675\x2dc127\x2d4723\x2da5c9\x2df628a70bc782-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw6nsz.mount: Deactivated successfully.
Nov 12 20:51:11.827490 systemd[1]: var-lib-kubelet-pods-c40d5f72\x2d9e2a\x2d4488\x2db81a\x2db0941c030539-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdkzs.mount: Deactivated successfully.
Nov 12 20:51:11.827957 systemd[1]: var-lib-kubelet-pods-c40d5f72\x2d9e2a\x2d4488\x2db81a\x2db0941c030539-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 12 20:51:11.828306 systemd[1]: var-lib-kubelet-pods-c40d5f72\x2d9e2a\x2d4488\x2db81a\x2db0941c030539-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 12 20:51:11.851704 kubelet[2753]: I1112 20:51:11.851644 2753 scope.go:117] "RemoveContainer" containerID="9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673"
Nov 12 20:51:11.854568 containerd[1590]: time="2024-11-12T20:51:11.854507466Z" level=info msg="RemoveContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\""
Nov 12 20:51:11.875248 containerd[1590]: time="2024-11-12T20:51:11.875139564Z" level=info msg="RemoveContainer for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" returns successfully"
Nov 12 20:51:11.899890 kubelet[2753]: I1112 20:51:11.897365 2753 scope.go:117] "RemoveContainer" containerID="9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673"
Nov 12 20:51:11.930672 containerd[1590]: time="2024-11-12T20:51:11.900782111Z" level=error msg="ContainerStatus for \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\": not found"
Nov 12 20:51:11.931332 kubelet[2753]: E1112 20:51:11.931303 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\": not found" containerID="9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673"
Nov 12 20:51:11.953322 kubelet[2753]: I1112 20:51:11.952516 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673"} err="failed to get container status \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f9c45f8fef68e2870dcdc5e4a759fdb5c4b9de48a43057b080a146a4938b673\": not found"
Nov 12 20:51:11.953631 kubelet[2753]: I1112 20:51:11.953605 2753 scope.go:117] "RemoveContainer" containerID="8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6"
Nov 12 20:51:11.955465 containerd[1590]: time="2024-11-12T20:51:11.955409207Z" level=info msg="RemoveContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\""
Nov 12 20:51:11.965921 containerd[1590]: time="2024-11-12T20:51:11.965838433Z" level=info msg="RemoveContainer for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" returns successfully"
Nov 12 20:51:11.966224 kubelet[2753]: I1112 20:51:11.966182 2753 scope.go:117] "RemoveContainer" containerID="3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548"
Nov 12 20:51:11.968158 containerd[1590]: time="2024-11-12T20:51:11.968116393Z" level=info msg="RemoveContainer for \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\""
Nov 12 20:51:11.980579 containerd[1590]: time="2024-11-12T20:51:11.980512782Z" level=info msg="RemoveContainer for \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\" returns successfully"
Nov 12 20:51:11.981238 kubelet[2753]: I1112 20:51:11.981197 2753 scope.go:117] "RemoveContainer" containerID="d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606"
Nov 12 20:51:11.983093 containerd[1590]: time="2024-11-12T20:51:11.982979628Z" level=info msg="RemoveContainer for \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\""
Nov 12 20:51:11.996649 containerd[1590]: time="2024-11-12T20:51:11.996446753Z" level=info msg="RemoveContainer for \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\" returns successfully"
Nov 12 20:51:11.996984 kubelet[2753]: I1112 20:51:11.996945 2753 scope.go:117] "RemoveContainer" containerID="9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45"
Nov 12 20:51:11.998644 containerd[1590]: time="2024-11-12T20:51:11.998594691Z" level=info msg="RemoveContainer for \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\""
Nov 12 20:51:12.010170 containerd[1590]: time="2024-11-12T20:51:12.010016221Z" level=info msg="RemoveContainer for \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\" returns successfully"
Nov 12 20:51:12.010562 kubelet[2753]: I1112 20:51:12.010528 2753 scope.go:117] "RemoveContainer" containerID="e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6"
Nov 12 20:51:12.012332 containerd[1590]: time="2024-11-12T20:51:12.012280484Z" level=info msg="RemoveContainer for \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\""
Nov 12 20:51:12.023086 containerd[1590]: time="2024-11-12T20:51:12.022874767Z" level=info msg="RemoveContainer for \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\" returns successfully"
Nov 12 20:51:12.023400 kubelet[2753]: I1112 20:51:12.023348 2753 scope.go:117] "RemoveContainer" containerID="8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6"
Nov 12 20:51:12.023948 containerd[1590]: time="2024-11-12T20:51:12.023815528Z" level=error msg="ContainerStatus for \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\": not found"
Nov 12 20:51:12.024213 kubelet[2753]: E1112 20:51:12.024090 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\": not found" containerID="8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6"
Nov 12 20:51:12.024213 kubelet[2753]: I1112 20:51:12.024145 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6"} err="failed to get container status \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8037ef1cae48c1e1a445151d52c7d79dad5de29ff68c1f95911936c6e36612b6\": not found"
Nov 12 20:51:12.024213 kubelet[2753]: I1112 20:51:12.024165 2753 scope.go:117] "RemoveContainer" containerID="3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548"
Nov 12 20:51:12.025189 kubelet[2753]: E1112 20:51:12.024737 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\": not found" containerID="3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548"
Nov 12 20:51:12.025189 kubelet[2753]: I1112 20:51:12.024800 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548"} err="failed to get container status \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\": rpc error: code = NotFound desc = an error occurred when try to find container \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\": not found"
Nov 12 20:51:12.025189 kubelet[2753]: I1112 20:51:12.024819 2753 scope.go:117] "RemoveContainer"
containerID="d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606" Nov 12 20:51:12.025565 containerd[1590]: time="2024-11-12T20:51:12.024464397Z" level=error msg="ContainerStatus for \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3702911039e1e3905a0d664931a1022c1749acb1b005163e0e344e6687d5b548\": not found" Nov 12 20:51:12.025565 containerd[1590]: time="2024-11-12T20:51:12.025324606Z" level=error msg="ContainerStatus for \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\": not found" Nov 12 20:51:12.025997 kubelet[2753]: E1112 20:51:12.025823 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\": not found" containerID="d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606" Nov 12 20:51:12.025997 kubelet[2753]: I1112 20:51:12.025871 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606"} err="failed to get container status \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\": rpc error: code = NotFound desc = an error occurred when try to find container \"d987c3b7ab1414e2178aab988add8105c96db0699c31b95ae75a29d684f32606\": not found" Nov 12 20:51:12.025997 kubelet[2753]: I1112 20:51:12.025889 2753 scope.go:117] "RemoveContainer" containerID="9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45" Nov 12 20:51:12.026244 containerd[1590]: time="2024-11-12T20:51:12.026196009Z" level=error msg="ContainerStatus for 
\"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\": not found" Nov 12 20:51:12.026425 kubelet[2753]: E1112 20:51:12.026388 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\": not found" containerID="9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45" Nov 12 20:51:12.026499 kubelet[2753]: I1112 20:51:12.026436 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45"} err="failed to get container status \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bbe2470e48bb50c35528acde117e540cf5449cd0417ea1f0ffb066aa33b2a45\": not found" Nov 12 20:51:12.026499 kubelet[2753]: I1112 20:51:12.026452 2753 scope.go:117] "RemoveContainer" containerID="e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6" Nov 12 20:51:12.026770 containerd[1590]: time="2024-11-12T20:51:12.026727306Z" level=error msg="ContainerStatus for \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\": not found" Nov 12 20:51:12.026944 kubelet[2753]: E1112 20:51:12.026925 2753 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\": not found" 
containerID="e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6" Nov 12 20:51:12.027011 kubelet[2753]: I1112 20:51:12.026966 2753 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6"} err="failed to get container status \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3773b69735c0cf66ce6f3e62e5454fe283c142b215cab7502beeecaf5f5eee6\": not found" Nov 12 20:51:12.309796 kubelet[2753]: I1112 20:51:12.309092 2753 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" path="/var/lib/kubelet/pods/c40d5f72-9e2a-4488-b81a-b0941c030539/volumes" Nov 12 20:51:12.310234 kubelet[2753]: I1112 20:51:12.310213 2753 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d712e675-c127-4723-a5c9-f628a70bc782" path="/var/lib/kubelet/pods/d712e675-c127-4723-a5c9-f628a70bc782/volumes" Nov 12 20:51:12.339867 containerd[1590]: time="2024-11-12T20:51:12.339815580Z" level=info msg="StopPodSandbox for \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\"" Nov 12 20:51:12.340257 containerd[1590]: time="2024-11-12T20:51:12.339924573Z" level=info msg="TearDown network for sandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" successfully" Nov 12 20:51:12.340257 containerd[1590]: time="2024-11-12T20:51:12.339941131Z" level=info msg="StopPodSandbox for \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" returns successfully" Nov 12 20:51:12.340642 containerd[1590]: time="2024-11-12T20:51:12.340550097Z" level=info msg="RemovePodSandbox for \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\"" Nov 12 20:51:12.343278 containerd[1590]: time="2024-11-12T20:51:12.343214707Z" level=info msg="Forcibly stopping sandbox 
\"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\"" Nov 12 20:51:12.343429 containerd[1590]: time="2024-11-12T20:51:12.343352707Z" level=info msg="TearDown network for sandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" successfully" Nov 12 20:51:12.351472 containerd[1590]: time="2024-11-12T20:51:12.351366075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:51:12.351472 containerd[1590]: time="2024-11-12T20:51:12.351471767Z" level=info msg="RemovePodSandbox \"319671297153eb4cd0102a21b8f7799a05fbe8410167eb33eb947c2d8c299c8f\" returns successfully" Nov 12 20:51:12.352305 containerd[1590]: time="2024-11-12T20:51:12.352245278Z" level=info msg="StopPodSandbox for \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\"" Nov 12 20:51:12.352400 containerd[1590]: time="2024-11-12T20:51:12.352376481Z" level=info msg="TearDown network for sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" successfully" Nov 12 20:51:12.352456 containerd[1590]: time="2024-11-12T20:51:12.352399375Z" level=info msg="StopPodSandbox for \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" returns successfully" Nov 12 20:51:12.353073 containerd[1590]: time="2024-11-12T20:51:12.352956227Z" level=info msg="RemovePodSandbox for \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\"" Nov 12 20:51:12.353073 containerd[1590]: time="2024-11-12T20:51:12.353006439Z" level=info msg="Forcibly stopping sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\"" Nov 12 20:51:12.354306 containerd[1590]: time="2024-11-12T20:51:12.353399182Z" level=info msg="TearDown network for sandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" successfully" Nov 
12 20:51:12.361720 containerd[1590]: time="2024-11-12T20:51:12.361605914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:51:12.362101 containerd[1590]: time="2024-11-12T20:51:12.362062221Z" level=info msg="RemovePodSandbox \"479163a3ed89fddf9f50230664f5eaca99866ed4c950e75388f5a11811fe9e33\" returns successfully" Nov 12 20:51:12.542658 kubelet[2753]: E1112 20:51:12.542624 2753 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 12 20:51:12.676522 sshd[4408]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:12.691718 systemd[1]: Started sshd@26-164.92.88.26:22-139.178.68.195:51400.service - OpenSSH per-connection server daemon (139.178.68.195:51400). Nov 12 20:51:12.692530 systemd[1]: sshd@25-164.92.88.26:22-139.178.68.195:51390.service: Deactivated successfully. Nov 12 20:51:12.699194 systemd[1]: session-26.scope: Deactivated successfully. Nov 12 20:51:12.701608 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit. Nov 12 20:51:12.703376 systemd-logind[1560]: Removed session 26. Nov 12 20:51:12.753529 sshd[4579]: Accepted publickey for core from 139.178.68.195 port 51400 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs Nov 12 20:51:12.756820 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:51:12.765463 systemd-logind[1560]: New session 27 of user core. Nov 12 20:51:12.772781 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 12 20:51:13.988052 sshd[4579]: pam_unix(sshd:session): session closed for user core
Nov 12 20:51:14.002615 systemd[1]: Started sshd@27-164.92.88.26:22-139.178.68.195:51410.service - OpenSSH per-connection server daemon (139.178.68.195:51410).
Nov 12 20:51:14.011745 systemd[1]: sshd@26-164.92.88.26:22-139.178.68.195:51400.service: Deactivated successfully.
Nov 12 20:51:14.020246 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:51:14.022648 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:51:14.035614 systemd-logind[1560]: Removed session 27.
Nov 12 20:51:14.078628 kubelet[2753]: I1112 20:51:14.069962 2753 topology_manager.go:215] "Topology Admit Handler" podUID="74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af" podNamespace="kube-system" podName="cilium-5bm9b"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.077205 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="mount-cgroup"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.077704 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="apply-sysctl-overwrites"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.077735 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="mount-bpf-fs"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.077746 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d712e675-c127-4723-a5c9-f628a70bc782" containerName="cilium-operator"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.077762 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="clean-cilium-state"
Nov 12 20:51:14.078628 kubelet[2753]: E1112 20:51:14.078499 2753 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="cilium-agent"
Nov 12 20:51:14.084278 kubelet[2753]: I1112 20:51:14.082064 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40d5f72-9e2a-4488-b81a-b0941c030539" containerName="cilium-agent"
Nov 12 20:51:14.084278 kubelet[2753]: I1112 20:51:14.082297 2753 memory_manager.go:354] "RemoveStaleState removing state" podUID="d712e675-c127-4723-a5c9-f628a70bc782" containerName="cilium-operator"
Nov 12 20:51:14.161086 sshd[4590]: Accepted publickey for core from 139.178.68.195 port 51410 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:51:14.166446 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:51:14.188873 systemd-logind[1560]: New session 28 of user core.
Nov 12 20:51:14.195414 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 20:51:14.267080 sshd[4590]: pam_unix(sshd:session): session closed for user core
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274037 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-bpf-maps\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274121 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-lib-modules\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274157 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-cilium-config-path\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274189 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-host-proc-sys-kernel\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274230 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-xtables-lock\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276305 kubelet[2753]: I1112 20:51:14.274272 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-cni-path\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276818 kubelet[2753]: I1112 20:51:14.274302 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-etc-cni-netd\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276818 kubelet[2753]: I1112 20:51:14.274331 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-clustermesh-secrets\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276818 kubelet[2753]: I1112 20:51:14.274366 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-hubble-tls\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276818 kubelet[2753]: I1112 20:51:14.274419 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-cilium-ipsec-secrets\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.276818 kubelet[2753]: I1112 20:51:14.274459 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-host-proc-sys-net\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.277074 kubelet[2753]: I1112 20:51:14.274567 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqmtn\" (UniqueName: \"kubernetes.io/projected/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-kube-api-access-mqmtn\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.277074 kubelet[2753]: I1112 20:51:14.274619 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-cilium-run\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.277074 kubelet[2753]: I1112 20:51:14.274657 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-cilium-cgroup\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.277074 kubelet[2753]: I1112 20:51:14.274714 2753 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af-hostproc\") pod \"cilium-5bm9b\" (UID: \"74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af\") " pod="kube-system/cilium-5bm9b"
Nov 12 20:51:14.285422 systemd[1]: Started sshd@28-164.92.88.26:22-139.178.68.195:51424.service - OpenSSH per-connection server daemon (139.178.68.195:51424).
Nov 12 20:51:14.286236 systemd[1]: sshd@27-164.92.88.26:22-139.178.68.195:51410.service: Deactivated successfully.
Nov 12 20:51:14.296048 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:51:14.300182 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:51:14.302900 systemd-logind[1560]: Removed session 28.
Nov 12 20:51:14.353271 sshd[4601]: Accepted publickey for core from 139.178.68.195 port 51424 ssh2: RSA SHA256:/Mu5B3+sQwSvJNgAFIVIybGipt6f4mtp7EAYN0WVQJs
Nov 12 20:51:14.356666 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:51:14.366963 systemd-logind[1560]: New session 29 of user core.
Nov 12 20:51:14.375377 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 20:51:14.724191 kubelet[2753]: E1112 20:51:14.724119 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:14.725085 containerd[1590]: time="2024-11-12T20:51:14.724786689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bm9b,Uid:74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af,Namespace:kube-system,Attempt:0,}"
Nov 12 20:51:14.788944 containerd[1590]: time="2024-11-12T20:51:14.788783204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:51:14.790369 containerd[1590]: time="2024-11-12T20:51:14.789913106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:51:14.790369 containerd[1590]: time="2024-11-12T20:51:14.789971398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:51:14.790369 containerd[1590]: time="2024-11-12T20:51:14.790161495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:51:14.857240 containerd[1590]: time="2024-11-12T20:51:14.857089430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5bm9b,Uid:74d2f4ce-4c0d-4f9e-b2ec-2a761f09f0af,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\""
Nov 12 20:51:14.858518 kubelet[2753]: E1112 20:51:14.858471 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:14.870179 containerd[1590]: time="2024-11-12T20:51:14.870105707Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 12 20:51:14.916509 containerd[1590]: time="2024-11-12T20:51:14.916269985Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e25f22eadc1bd4d83c8ea5f03e23a241a0ace1da71f8b7928b422ad657babbe2\""
Nov 12 20:51:14.920886 containerd[1590]: time="2024-11-12T20:51:14.919275365Z" level=info msg="StartContainer for \"e25f22eadc1bd4d83c8ea5f03e23a241a0ace1da71f8b7928b422ad657babbe2\""
Nov 12 20:51:15.016625 containerd[1590]: time="2024-11-12T20:51:15.016465681Z" level=info msg="StartContainer for \"e25f22eadc1bd4d83c8ea5f03e23a241a0ace1da71f8b7928b422ad657babbe2\" returns successfully"
Nov 12 20:51:15.077579 containerd[1590]: time="2024-11-12T20:51:15.077309069Z" level=info msg="shim disconnected" id=e25f22eadc1bd4d83c8ea5f03e23a241a0ace1da71f8b7928b422ad657babbe2 namespace=k8s.io
Nov 12 20:51:15.077579 containerd[1590]: time="2024-11-12T20:51:15.077388791Z" level=warning msg="cleaning up after shim disconnected" id=e25f22eadc1bd4d83c8ea5f03e23a241a0ace1da71f8b7928b422ad657babbe2 namespace=k8s.io
Nov 12 20:51:15.077579 containerd[1590]: time="2024-11-12T20:51:15.077404517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:15.232584 kubelet[2753]: I1112 20:51:15.231532 2753 setters.go:568] "Node became not ready" node="ci-4081.2.0-5-c2b3883be7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-11-12T20:51:15Z","lastTransitionTime":"2024-11-12T20:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 12 20:51:15.888957 kubelet[2753]: E1112 20:51:15.888793 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:15.895946 containerd[1590]: time="2024-11-12T20:51:15.895887121Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 12 20:51:15.949633 containerd[1590]: time="2024-11-12T20:51:15.949574635Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff\""
Nov 12 20:51:15.951848 containerd[1590]: time="2024-11-12T20:51:15.950977835Z" level=info msg="StartContainer for \"30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff\""
Nov 12 20:51:16.047852 containerd[1590]: time="2024-11-12T20:51:16.047734565Z" level=info msg="StartContainer for \"30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff\" returns successfully"
Nov 12 20:51:16.106373 containerd[1590]: time="2024-11-12T20:51:16.105889644Z" level=info msg="shim disconnected" id=30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff namespace=k8s.io
Nov 12 20:51:16.106373 containerd[1590]: time="2024-11-12T20:51:16.105974916Z" level=warning msg="cleaning up after shim disconnected" id=30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff namespace=k8s.io
Nov 12 20:51:16.106373 containerd[1590]: time="2024-11-12T20:51:16.105989413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:16.396764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30bd0d791343071fc801cda11c567007611933f6cfc09da80cbcb0b38a634eff-rootfs.mount: Deactivated successfully.
Nov 12 20:51:16.893961 kubelet[2753]: E1112 20:51:16.893829 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:16.902522 containerd[1590]: time="2024-11-12T20:51:16.902412190Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 12 20:51:16.953224 containerd[1590]: time="2024-11-12T20:51:16.952796584Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b\""
Nov 12 20:51:16.954711 containerd[1590]: time="2024-11-12T20:51:16.954628899Z" level=info msg="StartContainer for \"f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b\""
Nov 12 20:51:17.062488 containerd[1590]: time="2024-11-12T20:51:17.062407538Z" level=info msg="StartContainer for \"f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b\" returns successfully"
Nov 12 20:51:17.124652 containerd[1590]: time="2024-11-12T20:51:17.124559802Z" level=info msg="shim disconnected" id=f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b namespace=k8s.io
Nov 12 20:51:17.125406 containerd[1590]: time="2024-11-12T20:51:17.125011387Z" level=warning msg="cleaning up after shim disconnected" id=f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b namespace=k8s.io
Nov 12 20:51:17.125406 containerd[1590]: time="2024-11-12T20:51:17.125047498Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:17.396616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9028c7443785c46fec5c72bb6de1814432d0cf6fe4576458549c8eba9fd151b-rootfs.mount: Deactivated successfully.
Nov 12 20:51:17.544329 kubelet[2753]: E1112 20:51:17.544283 2753 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 12 20:51:17.899992 kubelet[2753]: E1112 20:51:17.899955 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 12 20:51:17.912518 containerd[1590]: time="2024-11-12T20:51:17.912213366Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 12 20:51:17.957382 containerd[1590]: time="2024-11-12T20:51:17.957176096Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa\""
Nov 12 20:51:17.959750 containerd[1590]: time="2024-11-12T20:51:17.958705195Z" level=info msg="StartContainer for \"26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa\""
Nov 12 20:51:18.065443 containerd[1590]: time="2024-11-12T20:51:18.065299948Z" level=info msg="StartContainer for \"26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa\" returns successfully"
Nov 12 20:51:18.102150 containerd[1590]: time="2024-11-12T20:51:18.102036669Z" level=info msg="shim disconnected" id=26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa namespace=k8s.io
Nov 12 20:51:18.102587 containerd[1590]: time="2024-11-12T20:51:18.102146264Z" level=warning msg="cleaning up after shim disconnected" id=26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa namespace=k8s.io
Nov 12 20:51:18.102587 containerd[1590]: time="2024-11-12T20:51:18.102265918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:51:18.397999 systemd[1]: run-containerd-runc-k8s.io-26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa-runc.nrwiJa.mount: Deactivated successfully.
Nov 12 20:51:18.399159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26300a80c32ba7eb0300b2c9510a7353c8dc800cce596522a322254c975541aa-rootfs.mount: Deactivated successfully.
Nov 12 20:51:18.906858 kubelet[2753]: E1112 20:51:18.906206 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:18.916919 containerd[1590]: time="2024-11-12T20:51:18.914112030Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 12 20:51:18.976364 containerd[1590]: time="2024-11-12T20:51:18.976276081Z" level=info msg="CreateContainer within sandbox \"db4392aca8083b0d0fe410c804d815e35bac61598ed5fd6b5c3ef81622922d75\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"445ddd821a3ffb8f3e9b4d97aae22a016911d6185c27ea03117b352f5377d91b\"" Nov 12 20:51:18.978307 containerd[1590]: time="2024-11-12T20:51:18.977520692Z" level=info msg="StartContainer for \"445ddd821a3ffb8f3e9b4d97aae22a016911d6185c27ea03117b352f5377d91b\"" Nov 12 20:51:19.161356 containerd[1590]: time="2024-11-12T20:51:19.161007051Z" level=info msg="StartContainer for \"445ddd821a3ffb8f3e9b4d97aae22a016911d6185c27ea03117b352f5377d91b\" returns successfully" Nov 12 20:51:19.929162 kubelet[2753]: E1112 20:51:19.927014 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:20.011840 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 12 20:51:20.930948 kubelet[2753]: E1112 20:51:20.930882 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:21.933289 kubelet[2753]: E1112 20:51:21.933230 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:23.175478 systemd[1]: run-containerd-runc-k8s.io-445ddd821a3ffb8f3e9b4d97aae22a016911d6185c27ea03117b352f5377d91b-runc.mJdsDu.mount: Deactivated successfully. Nov 12 20:51:23.856824 systemd-networkd[1223]: lxc_health: Link UP Nov 12 20:51:23.869381 systemd-networkd[1223]: lxc_health: Gained carrier Nov 12 20:51:24.728073 kubelet[2753]: E1112 20:51:24.728017 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:24.760713 kubelet[2753]: I1112 20:51:24.757533 2753 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5bm9b" podStartSLOduration=10.757449807 podStartE2EDuration="10.757449807s" podCreationTimestamp="2024-11-12 20:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:51:19.969526805 +0000 UTC m=+127.879162010" watchObservedRunningTime="2024-11-12 20:51:24.757449807 +0000 UTC m=+132.667084988" Nov 12 20:51:24.944783 kubelet[2753]: E1112 20:51:24.944736 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 12 20:51:25.714820 systemd-networkd[1223]: lxc_health: Gained IPv6LL Nov 12 20:51:25.775947 kubelet[2753]: E1112 20:51:25.775590 2753 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47842->127.0.0.1:38319: write tcp 127.0.0.1:47842->127.0.0.1:38319: write: connection reset by peer Nov 12 20:51:25.950978 kubelet[2753]: E1112 20:51:25.950187 2753 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Nov 12 20:51:27.997741 kubelet[2753]: E1112 20:51:27.997661 2753 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47854->127.0.0.1:38319: write tcp 127.0.0.1:47854->127.0.0.1:38319: write: broken pipe Nov 12 20:51:30.304416 kubelet[2753]: E1112 20:51:30.304207 2753 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50836->127.0.0.1:38319: write tcp 127.0.0.1:50836->127.0.0.1:38319: write: broken pipe Nov 12 20:51:30.318153 sshd[4601]: pam_unix(sshd:session): session closed for user core Nov 12 20:51:30.328137 systemd[1]: sshd@28-164.92.88.26:22-139.178.68.195:51424.service: Deactivated successfully. Nov 12 20:51:30.342191 systemd[1]: session-29.scope: Deactivated successfully. Nov 12 20:51:30.346598 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit. Nov 12 20:51:30.350945 systemd-logind[1560]: Removed session 29.