Oct 9 07:53:25.066612 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 9 07:53:25.066660 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 9 07:53:25.066674 kernel: BIOS-provided physical RAM map: Oct 9 07:53:25.066682 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 07:53:25.066692 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 07:53:25.066701 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 07:53:25.066717 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Oct 9 07:53:25.066725 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Oct 9 07:53:25.066732 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 9 07:53:25.066744 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 07:53:25.066751 kernel: NX (Execute Disable) protection: active Oct 9 07:53:25.066758 kernel: APIC: Static calls initialized Oct 9 07:53:25.066765 kernel: SMBIOS 2.8 present. Oct 9 07:53:25.066772 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Oct 9 07:53:25.066781 kernel: Hypervisor detected: KVM Oct 9 07:53:25.066793 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 07:53:25.066801 kernel: kvm-clock: using sched offset of 2964451083 cycles Oct 9 07:53:25.066815 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 07:53:25.066824 kernel: tsc: Detected 2494.138 MHz processor Oct 9 07:53:25.066833 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 07:53:25.066842 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 07:53:25.066850 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Oct 9 07:53:25.066858 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 07:53:25.066867 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 07:53:25.066879 kernel: ACPI: Early table checksum verification disabled Oct 9 07:53:25.066887 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Oct 9 07:53:25.066895 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066903 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066911 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066919 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 9 07:53:25.066927 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066935 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066943 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066956 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 07:53:25.066964 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Oct 9 07:53:25.066972 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Oct 9 07:53:25.066979 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 9 07:53:25.066987 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Oct 9 07:53:25.066995 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Oct 9 07:53:25.067004 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Oct 9 07:53:25.067017 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Oct 9 07:53:25.067029 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 9 07:53:25.067038 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 9 07:53:25.067046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 9 07:53:25.067055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 9 07:53:25.067063 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Oct 9 07:53:25.067072 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Oct 9 07:53:25.067084 kernel: Zone ranges: Oct 9 07:53:25.067093 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 07:53:25.067101 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Oct 9 07:53:25.067110 kernel: Normal empty Oct 9 07:53:25.067118 kernel: Movable zone start for each node Oct 9 07:53:25.067128 kernel: Early memory node ranges Oct 9 07:53:25.067139 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 07:53:25.067149 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Oct 9 07:53:25.067161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Oct 9 07:53:25.067173 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 07:53:25.067182 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 07:53:25.067212 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Oct 9 07:53:25.067221 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 07:53:25.067246 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 07:53:25.067255 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 07:53:25.067263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 07:53:25.067272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 07:53:25.067281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 07:53:25.067294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 07:53:25.067302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 07:53:25.067311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 07:53:25.067320 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 07:53:25.067328 kernel: TSC deadline timer available Oct 9 07:53:25.067337 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 9 07:53:25.067345 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 07:53:25.067354 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 9 07:53:25.067363 kernel: Booting paravirtualized kernel on KVM Oct 9 07:53:25.067381 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 07:53:25.067390 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 9 07:53:25.067399 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 9 07:53:25.067407 kernel: 
pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 9 07:53:25.067415 kernel: pcpu-alloc: [0] 0 1 Oct 9 07:53:25.067465 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 9 07:53:25.067476 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 9 07:53:25.067485 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 07:53:25.067510 kernel: random: crng init done Oct 9 07:53:25.067519 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 07:53:25.067530 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 9 07:53:25.067543 kernel: Fallback order for Node 0: 0 Oct 9 07:53:25.067555 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Oct 9 07:53:25.067566 kernel: Policy zone: DMA32 Oct 9 07:53:25.067591 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 07:53:25.067604 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved) Oct 9 07:53:25.067617 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 9 07:53:25.067637 kernel: Kernel/User page tables isolation: enabled Oct 9 07:53:25.067649 kernel: ftrace: allocating 37784 entries in 148 pages Oct 9 07:53:25.067663 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 07:53:25.067676 kernel: Dynamic Preempt: voluntary Oct 9 07:53:25.067689 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 07:53:25.067703 kernel: rcu: RCU event tracing is enabled. Oct 9 07:53:25.067715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 9 07:53:25.067727 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 07:53:25.067739 kernel: Rude variant of Tasks RCU enabled. Oct 9 07:53:25.067758 kernel: Tracing variant of Tasks RCU enabled. Oct 9 07:53:25.067772 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 07:53:25.067787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 9 07:53:25.067799 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 9 07:53:25.067811 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 07:53:25.067820 kernel: Console: colour VGA+ 80x25 Oct 9 07:53:25.067829 kernel: printk: console [tty0] enabled Oct 9 07:53:25.067837 kernel: printk: console [ttyS0] enabled Oct 9 07:53:25.067846 kernel: ACPI: Core revision 20230628 Oct 9 07:53:25.067855 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 07:53:25.067868 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 07:53:25.067877 kernel: x2apic enabled Oct 9 07:53:25.067886 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 07:53:25.067894 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 07:53:25.067903 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Oct 9 07:53:25.067912 kernel: Calibrating delay loop (skipped) preset value.. 
4988.27 BogoMIPS (lpj=2494138) Oct 9 07:53:25.067921 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 9 07:53:25.067930 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 9 07:53:25.067963 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 07:53:25.067978 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 07:53:25.067992 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 07:53:25.068007 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 07:53:25.068019 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 9 07:53:25.068035 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 07:53:25.068051 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 07:53:25.068065 kernel: MDS: Mitigation: Clear CPU buffers Oct 9 07:53:25.068082 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 9 07:53:25.068104 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 07:53:25.068121 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 07:53:25.068137 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 07:53:25.068151 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 07:53:25.068167 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 9 07:53:25.068182 kernel: Freeing SMP alternatives memory: 32K Oct 9 07:53:25.068234 kernel: pid_max: default: 32768 minimum: 301 Oct 9 07:53:25.068248 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 07:53:25.068270 kernel: landlock: Up and running. Oct 9 07:53:25.068283 kernel: SELinux: Initializing. Oct 9 07:53:25.068297 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:53:25.068307 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 9 07:53:25.068316 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Oct 9 07:53:25.068326 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:53:25.068339 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:53:25.068354 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 07:53:25.068368 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Oct 9 07:53:25.068389 kernel: signal: max sigframe size: 1776 Oct 9 07:53:25.068402 kernel: rcu: Hierarchical SRCU implementation. Oct 9 07:53:25.068412 kernel: rcu: Max phase no-delay instances is 400. Oct 9 07:53:25.068421 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 9 07:53:25.068431 kernel: smp: Bringing up secondary CPUs ... Oct 9 07:53:25.068440 kernel: smpboot: x86: Booting SMP configuration: Oct 9 07:53:25.068450 kernel: .... 
node #0, CPUs: #1 Oct 9 07:53:25.068459 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 07:53:25.068468 kernel: smpboot: Max logical packages: 1 Oct 9 07:53:25.068493 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Oct 9 07:53:25.068502 kernel: devtmpfs: initialized Oct 9 07:53:25.068512 kernel: x86/mm: Memory block size: 128MB Oct 9 07:53:25.068522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 07:53:25.068531 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 9 07:53:25.068541 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 07:53:25.068550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 07:53:25.068559 kernel: audit: initializing netlink subsys (disabled) Oct 9 07:53:25.068568 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 07:53:25.068582 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 07:53:25.068591 kernel: audit: type=2000 audit(1728460403.756:1): state=initialized audit_enabled=0 res=1 Oct 9 07:53:25.068600 kernel: cpuidle: using governor menu Oct 9 07:53:25.068609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 07:53:25.068619 kernel: dca service started, version 1.12.1 Oct 9 07:53:25.068628 kernel: PCI: Using configuration type 1 for base access Oct 9 07:53:25.068638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 9 07:53:25.068647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 07:53:25.068657 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 07:53:25.068670 kernel: ACPI: Added _OSI(Module Device) Oct 9 07:53:25.068679 kernel: ACPI: Added _OSI(Processor Device) Oct 9 07:53:25.068689 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 07:53:25.068698 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 07:53:25.068712 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 07:53:25.068728 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 07:53:25.068740 kernel: ACPI: Interpreter enabled Oct 9 07:53:25.068750 kernel: ACPI: PM: (supports S0 S5) Oct 9 07:53:25.068759 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 07:53:25.068779 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 07:53:25.068797 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 07:53:25.068812 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 9 07:53:25.068827 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 07:53:25.069265 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 9 07:53:25.069463 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 9 07:53:25.069573 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 9 07:53:25.069595 kernel: acpiphp: Slot [3] registered Oct 9 07:53:25.069605 kernel: acpiphp: Slot [4] registered Oct 9 07:53:25.069614 kernel: acpiphp: Slot [5] registered Oct 9 07:53:25.069624 kernel: acpiphp: Slot [6] registered Oct 9 07:53:25.069633 kernel: acpiphp: Slot [7] registered Oct 9 07:53:25.069642 kernel: acpiphp: Slot [8] registered Oct 9 07:53:25.069652 kernel: acpiphp: Slot [9] registered Oct 9 07:53:25.069662 kernel: acpiphp: Slot [10] registered Oct 9 07:53:25.069671 kernel: acpiphp: Slot [11] registered Oct 9 
07:53:25.069685 kernel: acpiphp: Slot [12] registered Oct 9 07:53:25.069695 kernel: acpiphp: Slot [13] registered Oct 9 07:53:25.069705 kernel: acpiphp: Slot [14] registered Oct 9 07:53:25.069714 kernel: acpiphp: Slot [15] registered Oct 9 07:53:25.069724 kernel: acpiphp: Slot [16] registered Oct 9 07:53:25.069734 kernel: acpiphp: Slot [17] registered Oct 9 07:53:25.069743 kernel: acpiphp: Slot [18] registered Oct 9 07:53:25.069752 kernel: acpiphp: Slot [19] registered Oct 9 07:53:25.069762 kernel: acpiphp: Slot [20] registered Oct 9 07:53:25.069771 kernel: acpiphp: Slot [21] registered Oct 9 07:53:25.069785 kernel: acpiphp: Slot [22] registered Oct 9 07:53:25.069794 kernel: acpiphp: Slot [23] registered Oct 9 07:53:25.069803 kernel: acpiphp: Slot [24] registered Oct 9 07:53:25.069813 kernel: acpiphp: Slot [25] registered Oct 9 07:53:25.069822 kernel: acpiphp: Slot [26] registered Oct 9 07:53:25.069831 kernel: acpiphp: Slot [27] registered Oct 9 07:53:25.069841 kernel: acpiphp: Slot [28] registered Oct 9 07:53:25.069851 kernel: acpiphp: Slot [29] registered Oct 9 07:53:25.069860 kernel: acpiphp: Slot [30] registered Oct 9 07:53:25.069873 kernel: acpiphp: Slot [31] registered Oct 9 07:53:25.069883 kernel: PCI host bridge to bus 0000:00 Oct 9 07:53:25.070028 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 07:53:25.070175 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 07:53:25.070397 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 07:53:25.070560 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 9 07:53:25.070708 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 9 07:53:25.070829 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 07:53:25.071024 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 9 07:53:25.071169 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 9 07:53:25.071325 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 9 07:53:25.071433 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Oct 9 07:53:25.071540 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 9 07:53:25.071684 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 9 07:53:25.071862 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 9 07:53:25.071981 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 9 07:53:25.072108 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Oct 9 07:53:25.072246 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Oct 9 07:53:25.072379 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 9 07:53:25.072526 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 9 07:53:25.072652 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 9 07:53:25.072779 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 9 07:53:25.072892 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 9 07:53:25.073029 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 9 07:53:25.073380 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Oct 9 07:53:25.073570 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Oct 9 07:53:25.073726 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 07:53:25.073888 kernel: pci 0000:00:03.0: 
[1af4:1000] type 00 class 0x020000 Oct 9 07:53:25.073999 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Oct 9 07:53:25.074113 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Oct 9 07:53:25.076407 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 9 07:53:25.076652 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 9 07:53:25.076839 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Oct 9 07:53:25.077032 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Oct 9 07:53:25.077345 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 9 07:53:25.077498 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Oct 9 07:53:25.077656 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Oct 9 07:53:25.077799 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Oct 9 07:53:25.077909 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 9 07:53:25.078053 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Oct 9 07:53:25.078237 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Oct 9 07:53:25.078429 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Oct 9 07:53:25.078608 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 9 07:53:25.078793 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Oct 9 07:53:25.078966 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Oct 9 07:53:25.079100 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Oct 9 07:53:25.081358 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Oct 9 07:53:25.081535 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Oct 9 07:53:25.081714 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Oct 9 07:53:25.081899 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Oct 9 07:53:25.081925 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 07:53:25.081940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 07:53:25.081954 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 07:53:25.081968 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 07:53:25.081991 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 9 07:53:25.082005 kernel: iommu: Default domain type: Translated Oct 9 07:53:25.082018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 07:53:25.082032 kernel: PCI: Using ACPI for IRQ routing Oct 9 07:53:25.082048 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 07:53:25.082061 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 07:53:25.082077 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Oct 9 07:53:25.084349 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 9 07:53:25.084509 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 9 07:53:25.084640 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 07:53:25.084660 kernel: vgaarb: loaded Oct 9 07:53:25.084670 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 07:53:25.084680 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 07:53:25.084690 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 07:53:25.084704 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 07:53:25.084714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 07:53:25.084724 kernel: pnp: PnP ACPI init Oct 9 
07:53:25.084733 kernel: pnp: PnP ACPI: found 4 devices Oct 9 07:53:25.084748 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 07:53:25.084758 kernel: NET: Registered PF_INET protocol family Oct 9 07:53:25.084768 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 07:53:25.084778 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 9 07:53:25.084787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 07:53:25.084796 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 9 07:53:25.084806 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 9 07:53:25.084815 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 9 07:53:25.084826 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:53:25.084843 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 9 07:53:25.084852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 07:53:25.084861 kernel: NET: Registered PF_XDP protocol family Oct 9 07:53:25.085128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 07:53:25.085289 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 07:53:25.085389 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 07:53:25.085502 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 9 07:53:25.085593 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 9 07:53:25.085727 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 9 07:53:25.085839 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 9 07:53:25.085854 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 9 07:53:25.085958 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 34062 usecs Oct 9 07:53:25.085971 kernel: PCI: CLS 0 bytes, default 64 Oct 9 07:53:25.085981 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 9 07:53:25.085991 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Oct 9 07:53:25.086001 kernel: Initialise system trusted keyrings Oct 9 07:53:25.086019 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 9 07:53:25.086030 kernel: Key type asymmetric registered Oct 9 07:53:25.086039 kernel: Asymmetric key parser 'x509' registered Oct 9 07:53:25.086048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 07:53:25.086058 kernel: io scheduler mq-deadline registered Oct 9 07:53:25.086067 kernel: io scheduler kyber registered Oct 9 07:53:25.086077 kernel: io scheduler bfq registered Oct 9 07:53:25.086086 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 07:53:25.086095 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 9 07:53:25.086106 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 9 07:53:25.086119 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 9 07:53:25.086129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 07:53:25.086139 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 07:53:25.086148 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 07:53:25.086158 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 07:53:25.086167 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 
07:53:25.087606 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 9 07:53:25.087670 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 07:53:25.087821 kernel: rtc_cmos 00:03: registered as rtc0 Oct 9 07:53:25.087962 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:53:24 UTC (1728460404) Oct 9 07:53:25.088098 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 9 07:53:25.088115 kernel: intel_pstate: CPU model not supported Oct 9 07:53:25.088129 kernel: NET: Registered PF_INET6 protocol family Oct 9 07:53:25.088139 kernel: Segment Routing with IPv6 Oct 9 07:53:25.088153 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 07:53:25.088163 kernel: NET: Registered PF_PACKET protocol family Oct 9 07:53:25.088183 kernel: Key type dns_resolver registered Oct 9 07:53:25.088993 kernel: IPI shorthand broadcast: enabled Oct 9 07:53:25.089006 kernel: sched_clock: Marking stable (1118004842, 93731072)->(1309631588, -97895674) Oct 9 07:53:25.089016 kernel: registered taskstats version 1 Oct 9 07:53:25.089025 kernel: Loading compiled-in X.509 certificates Oct 9 07:53:25.089036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 9 07:53:25.089045 kernel: Key type .fscrypt registered Oct 9 07:53:25.089054 kernel: Key type fscrypt-provisioning registered Oct 9 07:53:25.089064 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 07:53:25.089079 kernel: ima: Allocated hash algorithm: sha1 Oct 9 07:53:25.089088 kernel: ima: No architecture policies found Oct 9 07:53:25.089098 kernel: clk: Disabling unused clocks Oct 9 07:53:25.089108 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 9 07:53:25.089117 kernel: Write protecting the kernel read-only data: 36864k Oct 9 07:53:25.089155 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 9 07:53:25.089169 kernel: Run /init as init process Oct 9 07:53:25.089179 kernel: with arguments: Oct 9 07:53:25.089249 kernel: /init Oct 9 07:53:25.089263 kernel: with environment: Oct 9 07:53:25.089273 kernel: HOME=/ Oct 9 07:53:25.089282 kernel: TERM=linux Oct 9 07:53:25.089292 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 07:53:25.089307 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:53:25.089321 systemd[1]: Detected virtualization kvm. Oct 9 07:53:25.089332 systemd[1]: Detected architecture x86-64. Oct 9 07:53:25.089342 systemd[1]: Running in initrd. Oct 9 07:53:25.089357 systemd[1]: No hostname configured, using default hostname. Oct 9 07:53:25.089368 systemd[1]: Hostname set to . Oct 9 07:53:25.089378 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:53:25.089388 systemd[1]: Queued start job for default target initrd.target. Oct 9 07:53:25.089399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:53:25.089409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:25.089437 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 07:53:25.089448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:53:25.089462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 07:53:25.089473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 07:53:25.089485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 07:53:25.089496 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 07:53:25.089507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:25.089517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:53:25.089532 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:53:25.089543 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:53:25.089553 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:53:25.089567 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:53:25.089578 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:53:25.089588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:53:25.089616 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 07:53:25.089627 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 07:53:25.089641 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:25.089654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:25.089664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:25.089675 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:53:25.089703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 07:53:25.089714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:53:25.089729 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 07:53:25.089740 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 07:53:25.089752 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:53:25.089763 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:53:25.089774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:25.089785 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 07:53:25.089848 systemd-journald[183]: Collecting audit messages is disabled. Oct 9 07:53:25.089880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:25.089891 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 07:53:25.089903 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 07:53:25.089920 systemd-journald[183]: Journal started Oct 9 07:53:25.089944 systemd-journald[183]: Runtime Journal (/run/log/journal/d563dbf06954495992883e55176ad5e1) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:53:25.098231 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:53:25.110822 systemd-modules-load[184]: Inserted module 'overlay' Oct 9 07:53:25.114484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 9 07:53:25.157292 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 07:53:25.161160 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 9 07:53:25.161940 kernel: Bridge firewalling registered Oct 9 07:53:25.162202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:25.167600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:25.168597 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 07:53:25.180518 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:25.183297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:53:25.191624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:53:25.192585 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:53:25.213638 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:53:25.222480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:53:25.224611 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:25.230804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 07:53:25.232944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:53:25.260236 dracut-cmdline[217]: dracut-dracut-053 Oct 9 07:53:25.271229 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 9 07:53:25.289348 systemd-resolved[216]: Positive Trust Anchors: Oct 9 07:53:25.290296 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:53:25.291006 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:53:25.297616 systemd-resolved[216]: Defaulting to hostname 'linux'. Oct 9 07:53:25.299899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:53:25.300684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:53:25.391328 kernel: SCSI subsystem initialized Oct 9 07:53:25.403249 kernel: Loading iSCSI transport class v2.0-870. 
Oct 9 07:53:25.418252 kernel: iscsi: registered transport (tcp) Oct 9 07:53:25.445365 kernel: iscsi: registered transport (qla4xxx) Oct 9 07:53:25.445520 kernel: QLogic iSCSI HBA Driver Oct 9 07:53:25.522561 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 07:53:25.532991 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 07:53:25.570673 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 07:53:25.570880 kernel: device-mapper: uevent: version 1.0.3 Oct 9 07:53:25.572658 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 07:53:25.630289 kernel: raid6: avx2x4 gen() 17601 MB/s Oct 9 07:53:25.647292 kernel: raid6: avx2x2 gen() 16521 MB/s Oct 9 07:53:25.664280 kernel: raid6: avx2x1 gen() 15442 MB/s Oct 9 07:53:25.664433 kernel: raid6: using algorithm avx2x4 gen() 17601 MB/s Oct 9 07:53:25.682388 kernel: raid6: .... xor() 6417 MB/s, rmw enabled Oct 9 07:53:25.682513 kernel: raid6: using avx2x2 recovery algorithm Oct 9 07:53:25.711310 kernel: xor: automatically using best checksumming function avx Oct 9 07:53:25.918268 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 07:53:25.940811 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:53:25.950751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:53:25.978952 systemd-udevd[401]: Using default interface naming scheme 'v255'. Oct 9 07:53:25.985933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:53:25.996549 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 07:53:26.021455 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Oct 9 07:53:26.074309 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:53:26.082730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:53:26.166182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:53:26.173868 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 07:53:26.204507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 07:53:26.206369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:53:26.208020 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:26.209501 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:53:26.218554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 07:53:26.263037 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:53:26.276242 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Oct 9 07:53:26.276629 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 9 07:53:26.290219 kernel: scsi host0: Virtio SCSI HBA Oct 9 07:53:26.304304 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 07:53:26.304389 kernel: GPT:9289727 != 125829119 Oct 9 07:53:26.304403 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 07:53:26.305301 kernel: GPT:9289727 != 125829119 Oct 9 07:53:26.306371 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 9 07:53:26.306441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:26.326384 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Oct 9 07:53:26.339758 kernel: ACPI: bus type USB registered Oct 9 07:53:26.339850 kernel: usbcore: registered new interface driver usbfs Oct 9 07:53:26.339873 kernel: usbcore: registered new interface driver hub Oct 9 07:53:26.340713 kernel: usbcore: registered new device driver usb Oct 9 07:53:26.354928 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Oct 9 07:53:26.362477 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 07:53:26.375017 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:53:26.376510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:26.378929 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:26.381311 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:53:26.381596 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:26.383141 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:26.395772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:26.439226 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 07:53:26.439357 kernel: AES CTR mode by8 optimization enabled Oct 9 07:53:26.441228 kernel: libata version 3.00 loaded. Oct 9 07:53:26.478285 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 9 07:53:26.492368 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456) Oct 9 07:53:26.502751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:26.515640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 07:53:26.534232 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (447) Oct 9 07:53:26.534341 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Oct 9 07:53:26.535502 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Oct 9 07:53:26.542513 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Oct 9 07:53:26.544270 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Oct 9 07:53:26.559517 kernel: hub 1-0:1.0: USB hub found Oct 9 07:53:26.565054 kernel: hub 1-0:1.0: 2 ports detected Oct 9 07:53:26.573248 kernel: scsi host1: ata_piix Oct 9 07:53:26.575059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:26.580095 kernel: scsi host2: ata_piix Oct 9 07:53:26.580545 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Oct 9 07:53:26.580572 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Oct 9 07:53:26.596476 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 07:53:26.604240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 07:53:26.609788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:53:26.614532 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 07:53:26.615221 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Oct 9 07:53:26.623737 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 07:53:26.642442 disk-uuid[549]: Primary Header is updated. Oct 9 07:53:26.642442 disk-uuid[549]: Secondary Entries is updated. Oct 9 07:53:26.642442 disk-uuid[549]: Secondary Header is updated. Oct 9 07:53:26.656258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:26.666410 kernel: GPT:disk_guids don't match. Oct 9 07:53:26.666537 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 07:53:26.666560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:27.677251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 07:53:27.678617 disk-uuid[550]: The operation has completed successfully. Oct 9 07:53:27.733121 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 07:53:27.733321 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 07:53:27.740591 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 07:53:27.753554 sh[563]: Success Oct 9 07:53:27.772574 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 9 07:53:27.870892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 07:53:27.872125 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 07:53:27.879477 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 07:53:27.916366 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 9 07:53:27.916450 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:27.917358 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 07:53:27.919257 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 07:53:27.919330 kernel: BTRFS info (device dm-0): using free space tree Oct 9 07:53:27.929941 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 07:53:27.931514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 07:53:27.937767 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 07:53:27.941579 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 07:53:27.960745 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:53:27.960823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:27.960837 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:27.968233 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:27.986412 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 07:53:27.987206 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:53:27.997593 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 07:53:28.005615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 07:53:28.190709 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Oct 9 07:53:28.193284 ignition[656]: Ignition 2.19.0 Oct 9 07:53:28.193307 ignition[656]: Stage: fetch-offline Oct 9 07:53:28.193420 ignition[656]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:28.193437 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:28.193625 ignition[656]: parsed url from cmdline: "" Oct 9 07:53:28.193629 ignition[656]: no config URL provided Oct 9 07:53:28.193635 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:53:28.193644 ignition[656]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:53:28.193651 ignition[656]: failed to fetch config: resource requires networking Oct 9 07:53:28.193953 ignition[656]: Ignition finished successfully Oct 9 07:53:28.204613 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:53:28.205463 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:53:28.269614 systemd-networkd[753]: lo: Link UP Oct 9 07:53:28.269636 systemd-networkd[753]: lo: Gained carrier Oct 9 07:53:28.273238 systemd-networkd[753]: Enumeration completed Oct 9 07:53:28.273833 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:53:28.273838 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Oct 9 07:53:28.274998 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:53:28.275056 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:53:28.275061 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 07:53:28.276440 systemd-networkd[753]: eth0: Link UP Oct 9 07:53:28.276446 systemd-networkd[753]: eth0: Gained carrier Oct 9 07:53:28.276459 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Oct 9 07:53:28.276617 systemd[1]: Reached target network.target - Network. Oct 9 07:53:28.280736 systemd-networkd[753]: eth1: Link UP Oct 9 07:53:28.280740 systemd-networkd[753]: eth1: Gained carrier Oct 9 07:53:28.280758 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 07:53:28.284503 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 9 07:53:28.293323 systemd-networkd[753]: eth0: DHCPv4 address 64.23.254.253/20, gateway 64.23.240.1 acquired from 169.254.169.253 Oct 9 07:53:28.299365 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253 Oct 9 07:53:28.315116 ignition[756]: Ignition 2.19.0 Oct 9 07:53:28.316209 ignition[756]: Stage: fetch Oct 9 07:53:28.316522 ignition[756]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:28.316537 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:28.316666 ignition[756]: parsed url from cmdline: "" Oct 9 07:53:28.316669 ignition[756]: no config URL provided Oct 9 07:53:28.316675 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 07:53:28.316684 ignition[756]: no config at "/usr/lib/ignition/user.ign" Oct 9 07:53:28.316706 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Oct 9 07:53:28.333509 ignition[756]: GET result: OK Oct 9 07:53:28.334371 ignition[756]: parsing config with SHA512: 6b49df8f31f1d215a3f9a94368b996607b9d89545c46db4d9a7058527d3d8ae205956fb397e7ca2e4a82a795982d88e80c3a6eac748383bdc5c26ebf2ecc0019 Oct 9 07:53:28.340728 unknown[756]: fetched base config from "system" Oct 9 07:53:28.340753 unknown[756]: fetched base config from "system" Oct 9 07:53:28.340761 unknown[756]: fetched user config from "digitalocean" Oct 9 07:53:28.341536 ignition[756]: fetch: fetch complete Oct 9 07:53:28.341545 ignition[756]: fetch: fetch passed Oct 9 07:53:28.341623 ignition[756]: Ignition finished successfully Oct 9 07:53:28.344733 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 07:53:28.350546 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 07:53:28.389766 ignition[764]: Ignition 2.19.0 Oct 9 07:53:28.389778 ignition[764]: Stage: kargs Oct 9 07:53:28.390024 ignition[764]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:28.390037 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:28.393391 ignition[764]: kargs: kargs passed Oct 9 07:53:28.393501 ignition[764]: Ignition finished successfully Oct 9 07:53:28.394920 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 07:53:28.402582 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 07:53:28.436510 ignition[770]: Ignition 2.19.0 Oct 9 07:53:28.436526 ignition[770]: Stage: disks Oct 9 07:53:28.436756 ignition[770]: no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:28.436769 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:28.437753 ignition[770]: disks: disks passed Oct 9 07:53:28.439947 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 07:53:28.437815 ignition[770]: Ignition finished successfully Oct 9 07:53:28.447045 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 07:53:28.447844 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 07:53:28.448943 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:53:28.449798 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:53:28.450575 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:53:28.460501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 9 07:53:28.478718 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 07:53:28.481334 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 07:53:28.487361 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 07:53:28.608293 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 9 07:53:28.609471 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 07:53:28.611083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 07:53:28.621462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:53:28.624552 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 07:53:28.627296 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Oct 9 07:53:28.637669 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Oct 9 07:53:28.638121 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 9 07:53:28.643126 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:53:28.643159 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:28.643182 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:28.643701 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 07:53:28.643757 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:53:28.648948 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 07:53:28.651278 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 07:53:28.665980 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:28.672425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:53:28.740109 coreos-metadata[790]: Oct 09 07:53:28.739 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:53:28.752699 coreos-metadata[789]: Oct 09 07:53:28.752 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:53:28.753917 coreos-metadata[790]: Oct 09 07:53:28.751 INFO Fetch successful Oct 9 07:53:28.755356 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 07:53:28.758179 coreos-metadata[790]: Oct 09 07:53:28.758 INFO wrote hostname ci-4081.1.0-4-ec1af0061e to /sysroot/etc/hostname Oct 9 07:53:28.759478 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 07:53:28.764819 coreos-metadata[789]: Oct 09 07:53:28.764 INFO Fetch successful Oct 9 07:53:28.768480 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory Oct 9 07:53:28.775134 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Oct 9 07:53:28.775328 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Oct 9 07:53:28.777721 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 07:53:28.784846 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 07:53:28.913146 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 07:53:28.920392 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Oct 9 07:53:28.923161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 07:53:28.935434 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 07:53:28.936445 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:53:28.962254 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 07:53:28.979328 ignition[909]: INFO : Ignition 2.19.0 Oct 9 07:53:28.979328 ignition[909]: INFO : Stage: mount Oct 9 07:53:28.980777 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:28.980777 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:28.980777 ignition[909]: INFO : mount: mount passed Oct 9 07:53:28.980777 ignition[909]: INFO : Ignition finished successfully Oct 9 07:53:28.982471 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 07:53:28.989479 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 07:53:29.009505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 07:53:29.021322 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Oct 9 07:53:29.025559 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 9 07:53:29.025648 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 07:53:29.025673 kernel: BTRFS info (device vda6): using free space tree Oct 9 07:53:29.031250 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 07:53:29.033921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 07:53:29.072655 ignition[938]: INFO : Ignition 2.19.0 Oct 9 07:53:29.072655 ignition[938]: INFO : Stage: files Oct 9 07:53:29.073798 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:29.073798 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:29.075386 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Oct 9 07:53:29.077041 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 07:53:29.077041 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 07:53:29.081500 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 07:53:29.082307 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 07:53:29.082307 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 07:53:29.082065 unknown[938]: wrote ssh authorized keys file for user: core Oct 9 07:53:29.085522 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:53:29.085522 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 07:53:29.121572 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 07:53:29.214140 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 07:53:29.215174 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 07:53:29.215174 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 9 07:53:29.373565 systemd-networkd[753]: eth0: Gained IPv6LL Oct 9 07:53:29.565944 systemd-networkd[753]: eth1: Gained IPv6LL Oct 9 07:53:29.791271 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 07:53:29.932723 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 07:53:29.932723 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:53:29.934544 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Oct 9 07:53:30.336383 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 07:53:30.617162 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Oct 9 07:53:30.617162 ignition[938]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 9 07:53:30.618763 ignition[938]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:53:30.618763 ignition[938]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 07:53:30.618763 ignition[938]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 9 07:53:30.618763 ignition[938]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Oct 9 07:53:30.618763 ignition[938]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 07:53:30.622374 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:53:30.622374 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 07:53:30.622374 ignition[938]: INFO : files: files passed Oct 9 07:53:30.622374 ignition[938]: INFO : Ignition finished successfully Oct 9 07:53:30.620832 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 07:53:30.634624 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 07:53:30.639991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 07:53:30.642351 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 07:53:30.643089 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 07:53:30.667768 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:30.667768 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:30.671103 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 07:53:30.674303 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:53:30.676085 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 07:53:30.685588 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 07:53:30.732015 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 07:53:30.732290 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 07:53:30.733542 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 07:53:30.734179 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 07:53:30.734966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 07:53:30.746676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 07:53:30.763737 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:53:30.770542 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 07:53:30.795077 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:53:30.795750 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:30.796646 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 07:53:30.797625 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 07:53:30.797777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 07:53:30.799028 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 07:53:30.799969 systemd[1]: Stopped target basic.target - Basic System. 
Oct 9 07:53:30.800710 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 07:53:30.801435 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 07:53:30.802217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 07:53:30.803022 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 07:53:30.804012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 07:53:30.804902 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 07:53:30.805632 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 07:53:30.806398 systemd[1]: Stopped target swap.target - Swaps. Oct 9 07:53:30.807264 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 07:53:30.807455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 07:53:30.808435 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:53:30.809336 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:30.810141 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 07:53:30.811093 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:53:30.812112 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 07:53:30.812292 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 07:53:30.813574 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 07:53:30.813767 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 07:53:30.814659 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 07:53:30.814810 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 07:53:30.815455 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 9 07:53:30.815665 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 07:53:30.827798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 07:53:30.828298 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 07:53:30.828483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:30.833672 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 07:53:30.834700 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 07:53:30.835564 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:53:30.836800 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 07:53:30.836940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 07:53:30.847626 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 07:53:30.851746 ignition[990]: INFO : Ignition 2.19.0 Oct 9 07:53:30.851746 ignition[990]: INFO : Stage: umount Oct 9 07:53:30.856421 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 07:53:30.856421 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Oct 9 07:53:30.856421 ignition[990]: INFO : umount: umount passed Oct 9 07:53:30.856421 ignition[990]: INFO : Ignition finished successfully Oct 9 07:53:30.854536 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
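The file and unit writes logged by the Ignition files stage above (the helm tarball, cilium.tar.gz, the prepare-helm.service unit and its enabled preset) are driven by the droplet's user-data config, which this log does not reproduce. The Butane fragment below is therefore purely illustrative of the kind of config that yields such operations: the paths and URLs are taken from the log lines above, while the unit contents, the tar invocation, and all other field values are assumptions.

# Illustrative only: a Butane fragment of the sort that would produce the writes seen above
cat <<'EOF' > example.bu
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        # The real unit's command is not visible in this log; this line is an assumption
        ExecStart=/usr/bin/tar -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm
        [Install]
        WantedBy=multi-user.target
EOF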
Oct 9 07:53:30.855594 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 07:53:30.855739 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 07:53:30.861175 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 07:53:30.861358 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 07:53:30.864534 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 07:53:30.864623 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 07:53:30.865048 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 07:53:30.865092 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 07:53:30.865900 systemd[1]: Stopped target network.target - Network. Oct 9 07:53:30.866869 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 07:53:30.866956 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 07:53:30.871131 systemd[1]: Stopped target paths.target - Path Units. Oct 9 07:53:30.871623 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 07:53:30.876338 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:30.879674 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 07:53:30.919375 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 07:53:30.920168 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 07:53:30.920346 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 07:53:30.921128 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 07:53:30.921250 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 07:53:30.922322 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 07:53:30.922421 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 07:53:30.923306 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 07:53:30.923388 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 07:53:30.925877 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 07:53:30.928917 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 07:53:30.930994 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 07:53:30.968342 systemd-networkd[753]: eth1: DHCPv6 lease lost Oct 9 07:53:30.973336 systemd-networkd[753]: eth0: DHCPv6 lease lost Oct 9 07:53:30.974531 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 07:53:30.974679 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 07:53:30.976743 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 07:53:30.976924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 07:53:30.980740 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 07:53:30.980934 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 07:53:30.983184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 07:53:30.983731 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:30.984304 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 07:53:30.984369 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 07:53:30.996158 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 07:53:30.997268 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Oct 9 07:53:30.997400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 07:53:30.999304 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 07:53:30.999384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:53:31.000531 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 07:53:31.000609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:31.001433 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 07:53:31.001504 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:53:31.002672 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:53:31.017910 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 07:53:31.018120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:53:31.019729 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 07:53:31.019877 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 07:53:31.022135 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 07:53:31.022221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:31.023253 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 07:53:31.023311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:31.024046 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 07:53:31.024110 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 07:53:31.025171 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 07:53:31.025269 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 07:53:31.026492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 07:53:31.026554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 07:53:31.036541 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 07:53:31.037618 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 07:53:31.037736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:53:31.038250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:53:31.038301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:31.046871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 07:53:31.048209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 07:53:31.049815 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 07:53:31.059629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 07:53:31.072817 systemd[1]: Switching root. Oct 9 07:53:31.108417 systemd-journald[183]: Journal stopped Oct 9 07:53:32.304402 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Oct 9 07:53:32.304500 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 07:53:32.304517 kernel: SELinux: policy capability open_perms=1 Oct 9 07:53:32.304533 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 07:53:32.304551 kernel: SELinux: policy capability always_check_network=0 Oct 9 07:53:32.304563 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 07:53:32.304575 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 07:53:32.304587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 07:53:32.304616 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 07:53:32.304628 kernel: audit: type=1403 audit(1728460411.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 07:53:32.304647 systemd[1]: Successfully loaded SELinux policy in 40.482ms. Oct 9 07:53:32.304672 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.634ms. Oct 9 07:53:32.304690 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 07:53:32.304703 systemd[1]: Detected virtualization kvm. Oct 9 07:53:32.304716 systemd[1]: Detected architecture x86-64. Oct 9 07:53:32.304730 systemd[1]: Detected first boot. Oct 9 07:53:32.304743 systemd[1]: Hostname set to . Oct 9 07:53:32.304756 systemd[1]: Initializing machine ID from VM UUID. Oct 9 07:53:32.304769 zram_generator::config[1032]: No configuration found. Oct 9 07:53:32.304790 systemd[1]: Populated /etc with preset unit settings. Oct 9 07:53:32.304803 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 07:53:32.304817 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 07:53:32.304831 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 07:53:32.304845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 07:53:32.304858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 07:53:32.304871 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 07:53:32.304884 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 07:53:32.304916 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 07:53:32.304934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 07:53:32.304946 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 07:53:32.304959 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 07:53:32.304972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 07:53:32.304984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 07:53:32.304997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 07:53:32.305010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 07:53:32.305022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
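When systemd reports "Initializing machine ID from VM UUID" above, /etc/machine-id is derived from the hypervisor-provided DMI UUID rather than generated at random. As a hedged aside, the two values can be compared on the running droplet; the /sys path below is the conventional DMI location on KVM guests and is assumed here, not taken from the log.

cat /sys/class/dmi/id/product_uuid
cat /etc/machine-id   # expected to be the same UUID as lower-case hex without dashes (not verified here)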
Oct 9 07:53:32.305039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 07:53:32.305051 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 07:53:32.305064 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 07:53:32.305077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 07:53:32.305094 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 07:53:32.305107 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 07:53:32.305120 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 07:53:32.305136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 07:53:32.305150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 07:53:32.305162 systemd[1]: Reached target slices.target - Slice Units. Oct 9 07:53:32.305176 systemd[1]: Reached target swap.target - Swaps. Oct 9 07:53:32.305204 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 07:53:32.305224 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 07:53:32.305236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 07:53:32.305249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 07:53:32.305262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 07:53:32.305279 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 07:53:32.305292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 07:53:32.305305 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 07:53:32.305318 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 07:53:32.305332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:32.305344 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 07:53:32.305358 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 07:53:32.305370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 07:53:32.305383 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 07:53:32.305400 systemd[1]: Reached target machines.target - Containers. Oct 9 07:53:32.305414 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 07:53:32.305427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:32.305440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 07:53:32.305453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 07:53:32.305466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:53:32.305479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:53:32.305492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:53:32.305514 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 07:53:32.305527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 9 07:53:32.305541 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 07:53:32.305554 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 07:53:32.305566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 07:53:32.305578 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 07:53:32.305591 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 07:53:32.305603 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 07:53:32.305615 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 07:53:32.305631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 07:53:32.305644 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 07:53:32.305656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 07:53:32.305669 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 07:53:32.305681 systemd[1]: Stopped verity-setup.service. Oct 9 07:53:32.305694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:32.305706 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 07:53:32.305719 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 07:53:32.305735 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 07:53:32.305777 systemd-journald[1109]: Collecting audit messages is disabled. Oct 9 07:53:32.305805 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 07:53:32.305819 systemd-journald[1109]: Journal started Oct 9 07:53:32.305852 systemd-journald[1109]: Runtime Journal (/run/log/journal/d563dbf06954495992883e55176ad5e1) is 4.9M, max 39.3M, 34.4M free. Oct 9 07:53:32.000867 systemd[1]: Queued start job for default target multi-user.target. Oct 9 07:53:32.019789 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 07:53:32.020368 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 07:53:32.310209 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 07:53:32.311730 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 07:53:32.316279 kernel: loop: module loaded Oct 9 07:53:32.335670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 07:53:32.337491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 07:53:32.339024 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 07:53:32.341636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 07:53:32.342814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:53:32.343036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:53:32.344100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:53:32.344315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:53:32.345130 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:53:32.348489 kernel: fuse: init (API version 7.39) Oct 9 07:53:32.346430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
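The modprobe@<module>.service instances finishing above are systemd template units that simply load the kernel module named by their instance (configfs, dm_mod, efi_pstore, fuse, loop, and drm shortly after). A hedged manual equivalent follows, with a listing command added only for inspection.

# Load the same modules by hand (sketch; on this boot systemd has already done it)
modprobe -a configfs dm_mod efi_pstore fuse loop
lsmod | grep -E '^(dm_mod|fuse|loop)'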
Oct 9 07:53:32.347746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 07:53:32.357449 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 07:53:32.357696 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 07:53:32.359905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 07:53:32.373676 kernel: ACPI: bus type drm_connector registered Oct 9 07:53:32.373224 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 07:53:32.380104 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:53:32.380365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:53:32.382991 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 07:53:32.387997 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 07:53:32.397859 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 07:53:32.411214 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 07:53:32.411878 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 07:53:32.411935 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 07:53:32.414086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 07:53:32.427222 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 07:53:32.433443 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 07:53:32.434248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:53:32.445114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 07:53:32.447955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 07:53:32.449324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:53:32.454429 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 07:53:32.455018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:53:32.459206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:53:32.469460 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 07:53:32.472433 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 07:53:32.474805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 07:53:32.476584 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 07:53:32.477125 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 07:53:32.477931 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 07:53:32.501618 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 07:53:32.524143 systemd-journald[1109]: Time spent on flushing to /var/log/journal/d563dbf06954495992883e55176ad5e1 is 102.734ms for 994 entries. 
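systemd-journal-flush.service, starting above, asks journald to move the runtime journal out of /run/log/journal and into persistent storage under /var/log/journal. A hedged sketch of the equivalent manual request, with a disk-usage check added only for inspection:

journalctl --flush        # what the flush service effectively requests of journald
journalctl --disk-usage   # compare with the journal sizes journald reports in this log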
Oct 9 07:53:32.524143 systemd-journald[1109]: System Journal (/var/log/journal/d563dbf06954495992883e55176ad5e1) is 8.0M, max 195.6M, 187.6M free. Oct 9 07:53:32.655221 systemd-journald[1109]: Received client request to flush runtime journal. Oct 9 07:53:32.655308 kernel: loop0: detected capacity change from 0 to 140768 Oct 9 07:53:32.655336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 07:53:32.655356 kernel: loop1: detected capacity change from 0 to 8 Oct 9 07:53:32.541253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 07:53:32.542497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 07:53:32.557495 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 07:53:32.600547 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 07:53:32.641444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:53:32.654077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 07:53:32.655461 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 07:53:32.660270 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 07:53:32.665112 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 07:53:32.673437 kernel: loop2: detected capacity change from 0 to 205544 Oct 9 07:53:32.675452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 07:53:32.702939 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Oct 9 07:53:32.703397 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Oct 9 07:53:32.713891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 07:53:32.737599 kernel: loop3: detected capacity change from 0 to 142488 Oct 9 07:53:32.781246 kernel: loop4: detected capacity change from 0 to 140768 Oct 9 07:53:32.814452 kernel: loop5: detected capacity change from 0 to 8 Oct 9 07:53:32.817221 kernel: loop6: detected capacity change from 0 to 205544 Oct 9 07:53:32.836252 kernel: loop7: detected capacity change from 0 to 142488 Oct 9 07:53:32.856497 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Oct 9 07:53:32.857168 (sd-merge)[1179]: Merged extensions into '/usr'. Oct 9 07:53:32.873644 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 07:53:32.873670 systemd[1]: Reloading... Oct 9 07:53:33.078221 zram_generator::config[1204]: No configuration found. Oct 9 07:53:33.149223 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 07:53:33.252257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:53:33.306350 systemd[1]: Reloading finished in 431 ms. Oct 9 07:53:33.328918 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 07:53:33.333676 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 07:53:33.342687 systemd[1]: Starting ensure-sysext.service... 
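The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-digitalocean' extension images onto /usr. A hedged sketch for inspecting that merge on the running system; the output is not shown in this log.

systemd-sysext status    # lists which extension images are currently merged
ls -l /etc/extensions/   # the kubernetes.raw symlink written during the Ignition files stage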
Oct 9 07:53:33.349138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 07:53:33.367736 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Oct 9 07:53:33.367756 systemd[1]: Reloading... Oct 9 07:53:33.414739 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 07:53:33.415138 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 07:53:33.416148 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 07:53:33.419684 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Oct 9 07:53:33.419773 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Oct 9 07:53:33.425611 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:53:33.425630 systemd-tmpfiles[1249]: Skipping /boot Oct 9 07:53:33.456692 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 07:53:33.456714 systemd-tmpfiles[1249]: Skipping /boot Oct 9 07:53:33.501227 zram_generator::config[1276]: No configuration found. Oct 9 07:53:33.655140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:53:33.709216 systemd[1]: Reloading finished in 340 ms. Oct 9 07:53:33.722730 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 07:53:33.723905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:53:33.749623 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:53:33.754433 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 07:53:33.759625 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:53:33.772478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:53:33.777002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:53:33.779390 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:53:33.788504 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.788719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:33.800789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:53:33.805757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:53:33.810617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:53:33.812527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:53:33.812732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.824613 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Oct 9 07:53:33.828925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.829152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:33.829361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:53:33.829466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.832317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.832592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:33.843988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:53:33.844720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:53:33.844968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:33.859770 systemd[1]: Finished ensure-sysext.service. Oct 9 07:53:33.860748 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:53:33.877122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:53:33.877364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:53:33.878498 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:53:33.879387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:53:33.885868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:53:33.886150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:53:33.889301 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:53:33.895957 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:53:33.896157 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:53:33.902622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:53:33.902716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:53:33.914449 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:53:33.925520 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:53:33.928280 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:53:33.933250 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:53:33.940354 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Oct 9 07:53:33.950464 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 07:53:33.957149 augenrules[1361]: No rules Oct 9 07:53:33.962952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Oct 9 07:53:33.965153 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:53:34.005617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:53:34.015526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:53:34.094663 systemd-resolved[1326]: Positive Trust Anchors: Oct 9 07:53:34.094679 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:53:34.094718 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:53:34.102578 systemd-resolved[1326]: Using system hostname 'ci-4081.1.0-4-ec1af0061e'. Oct 9 07:53:34.105548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:53:34.106493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 07:53:34.108462 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:53:34.109383 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:53:34.146060 systemd-networkd[1371]: lo: Link UP Oct 9 07:53:34.146514 systemd-networkd[1371]: lo: Gained carrier Oct 9 07:53:34.148094 systemd-networkd[1371]: Enumeration completed Oct 9 07:53:34.148530 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:53:34.149367 systemd[1]: Reached target network.target - Network. Oct 9 07:53:34.158549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:53:34.192218 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Oct 9 07:53:34.199216 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373) Oct 9 07:53:34.216610 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 9 07:53:34.218465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:34.218642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:53:34.226524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:53:34.229263 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1378) Oct 9 07:53:34.234985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:53:34.239532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:53:34.241485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:53:34.241542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
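Once systemd-resolved, systemd-timesyncd, and systemd-networkd are up as logged above, their state can be reviewed with the standard systemd clients. This is a hedged aside for later inspection rather than part of the boot sequence, and the output is not shown in this log.

resolvectl status    # DNS servers and the trust anchors listed above
hostnamectl status   # should report the system hostname ci-4081.1.0-4-ec1af0061e
timedatectl status   # reflects systemd-timesyncd, started just above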
Oct 9 07:53:34.241561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:53:34.251988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:53:34.252201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:53:34.262286 kernel: ISO 9660 Extensions: RRIP_1991A Oct 9 07:53:34.262456 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:53:34.262685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:53:34.268105 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Oct 9 07:53:34.271003 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-6e:26:8d:cd:1b:0f.network. Oct 9 07:53:34.272215 systemd-networkd[1371]: eth0: Link UP Oct 9 07:53:34.272399 systemd-networkd[1371]: eth0: Gained carrier Oct 9 07:53:34.273612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:53:34.273710 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 07:53:34.276702 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:34.287688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:53:34.287911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:53:34.290880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:53:34.342974 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:53:34.351856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 07:53:34.372360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 07:53:34.377220 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 9 07:53:34.382484 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:53:34.390673 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-ca:d8:56:09:e5:9f.network. Oct 9 07:53:34.391595 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:34.391747 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:53:34.392586 systemd-networkd[1371]: eth1: Link UP Oct 9 07:53:34.392595 systemd-networkd[1371]: eth1: Gained carrier Oct 9 07:53:34.395865 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:34.396963 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:34.418263 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 07:53:34.465221 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:53:34.468579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
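systemd-networkd configures eth0 and eth1 above from generated unit files named after each interface's MAC address under /run/systemd/network/. The file names are taken from the log; their contents are written by the platform's network generator and are not reproduced here, so the sketch below only inspects them on the running system.

networkctl status eth0
cat /run/systemd/network/10-6e:26:8d:cd:1b:0f.network   # generated unit named in the log above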
Oct 9 07:53:34.531825 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 9 07:53:34.531938 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 9 07:53:34.536236 kernel: Console: switching to colour dummy device 80x25 Oct 9 07:53:34.536315 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 07:53:34.536368 kernel: [drm] features: -context_init Oct 9 07:53:34.538272 kernel: [drm] number of scanouts: 1 Oct 9 07:53:34.538381 kernel: [drm] number of cap sets: 0 Oct 9 07:53:34.545221 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Oct 9 07:53:34.579030 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 9 07:53:34.579124 kernel: Console: switching to colour frame buffer device 128x48 Oct 9 07:53:34.579175 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 07:53:34.592755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:53:34.593973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:34.610461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:53:34.663267 kernel: EDAC MC: Ver: 3.0.0 Oct 9 07:53:34.680930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:53:34.694973 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:53:34.707802 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:53:34.724856 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:53:34.761168 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:53:34.761877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:53:34.762008 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:53:34.762226 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:53:34.762394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:53:34.762753 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:53:34.762998 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:53:34.763114 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:53:34.763588 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:53:34.763644 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:53:34.764381 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:53:34.765346 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 07:53:34.768232 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:53:34.783693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:53:34.786253 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:53:34.789350 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:53:34.791282 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:53:34.792937 systemd[1]: Reached target basic.target - Basic System. 
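Alongside the timer and socket units being set up above (logrotate.timer, mdadm.timer, dbus.socket, docker.socket, sshd.socket), socket and timer activation can be enumerated later with systemctl; a hedged aside, not part of the boot flow.

systemctl list-sockets   # dbus.socket, docker.socket, sshd.socket should appear here
systemctl list-timers    # logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer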
Oct 9 07:53:34.793526 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:53:34.793558 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:53:34.802448 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:53:34.808518 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:53:34.813500 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:53:34.819653 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:53:34.836465 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:53:34.844628 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:53:34.847535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:53:34.854838 jq[1437]: false Oct 9 07:53:34.855533 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:53:34.867432 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:53:34.876600 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:53:34.891542 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 07:53:34.898130 coreos-metadata[1435]: Oct 09 07:53:34.897 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:53:34.905515 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:53:34.909612 coreos-metadata[1435]: Oct 09 07:53:34.908 INFO Fetch successful Oct 9 07:53:34.910321 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:53:34.922998 dbus-daemon[1436]: [system] SELinux support is enabled Oct 9 07:53:34.910932 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:53:34.919867 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:53:34.933621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:53:34.936290 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:53:34.944074 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:53:34.957920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:53:34.958147 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:53:34.980398 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:53:34.980478 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 07:53:34.985481 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 9 07:53:34.985658 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 9 07:53:34.985703 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:53:35.003794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:53:35.005527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:53:35.037966 jq[1451]: true Oct 9 07:53:35.034590 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 07:53:35.035334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:53:35.064282 extend-filesystems[1440]: Found loop4 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found loop5 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found loop6 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found loop7 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda1 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda2 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda3 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found usr Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda4 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda6 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda7 Oct 9 07:53:35.064282 extend-filesystems[1440]: Found vda9 Oct 9 07:53:35.064282 extend-filesystems[1440]: Checking size of /dev/vda9 Oct 9 07:53:35.141496 extend-filesystems[1440]: Resized partition /dev/vda9 Oct 9 07:53:35.101688 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:53:35.149697 jq[1471]: true Oct 9 07:53:35.149850 update_engine[1449]: I20241009 07:53:35.092800 1449 main.cc:92] Flatcar Update Engine starting Oct 9 07:53:35.149850 update_engine[1449]: I20241009 07:53:35.110016 1449 update_check_scheduler.cc:74] Next update check in 9m17s Oct 9 07:53:35.151371 tar[1458]: linux-amd64/helm Oct 9 07:53:35.151655 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Oct 9 07:53:35.102254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:53:35.104392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:53:35.106629 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:53:35.120600 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:53:35.174122 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Oct 9 07:53:35.209893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1374) Oct 9 07:53:35.217633 systemd-logind[1446]: New seat seat0. Oct 9 07:53:35.226363 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:53:35.226441 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:53:35.239334 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:53:35.316918 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:53:35.364292 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:53:35.371553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
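The kernel line above shows the ext4 root filesystem on /dev/vda9 being grown from 553472 to 15121403 blocks; the resize2fs output with the result appears just below. The on-line grow that extend-filesystems.service performs amounts to the following sketch (assuming /dev/vda9 is the mounted root filesystem, as in this log):

    df -h /              # size before the grow
    resize2fs /dev/vda9  # with no size argument, grows the mounted ext4 filesystem to fill the partition
    df -h /              # size after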
Oct 9 07:53:35.384700 systemd[1]: Starting sshkeys.service... Oct 9 07:53:35.394768 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 9 07:53:35.414565 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:53:35.424666 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:53:35.424666 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 9 07:53:35.424666 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 9 07:53:35.437866 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Oct 9 07:53:35.437866 extend-filesystems[1440]: Found vdb Oct 9 07:53:35.425080 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 07:53:35.440760 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:53:35.441114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 07:53:35.490077 coreos-metadata[1508]: Oct 09 07:53:35.489 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:53:35.503141 coreos-metadata[1508]: Oct 09 07:53:35.501 INFO Fetch successful Oct 9 07:53:35.516747 unknown[1508]: wrote ssh authorized keys file for user: core Oct 9 07:53:35.597475 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:53:35.599815 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:53:35.607991 systemd[1]: Finished sshkeys.service. Oct 9 07:53:35.734262 containerd[1472]: time="2024-10-09T07:53:35.734059788Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 9 07:53:35.816387 containerd[1472]: time="2024-10-09T07:53:35.816242657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.826517 containerd[1472]: time="2024-10-09T07:53:35.826420471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:53:35.826517 containerd[1472]: time="2024-10-09T07:53:35.826504851Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:53:35.826517 containerd[1472]: time="2024-10-09T07:53:35.826525183Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 07:53:35.826798 containerd[1472]: time="2024-10-09T07:53:35.826775600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:53:35.826830 containerd[1472]: time="2024-10-09T07:53:35.826817718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.826912 containerd[1472]: time="2024-10-09T07:53:35.826895955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:53:35.826937 containerd[1472]: time="2024-10-09T07:53:35.826912348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827164 containerd[1472]: time="2024-10-09T07:53:35.827140364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827164 containerd[1472]: time="2024-10-09T07:53:35.827158918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827254 containerd[1472]: time="2024-10-09T07:53:35.827172021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827254 containerd[1472]: time="2024-10-09T07:53:35.827182126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827302 containerd[1472]: time="2024-10-09T07:53:35.827287565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827631 containerd[1472]: time="2024-10-09T07:53:35.827598444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827827 containerd[1472]: time="2024-10-09T07:53:35.827798572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:53:35.827827 containerd[1472]: time="2024-10-09T07:53:35.827827201Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:53:35.827954 containerd[1472]: time="2024-10-09T07:53:35.827937641Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:53:35.828012 containerd[1472]: time="2024-10-09T07:53:35.827999148Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:53:35.836692 containerd[1472]: time="2024-10-09T07:53:35.836632512Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:53:35.836842 containerd[1472]: time="2024-10-09T07:53:35.836721317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:53:35.836842 containerd[1472]: time="2024-10-09T07:53:35.836742524Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:53:35.836842 containerd[1472]: time="2024-10-09T07:53:35.836760342Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:53:35.836842 containerd[1472]: time="2024-10-09T07:53:35.836820671Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 07:53:35.837037 containerd[1472]: time="2024-10-09T07:53:35.837017726Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:53:35.837430 containerd[1472]: time="2024-10-09T07:53:35.837409249Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 9 07:53:35.837577 containerd[1472]: time="2024-10-09T07:53:35.837550713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:53:35.837577 containerd[1472]: time="2024-10-09T07:53:35.837572828Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 07:53:35.837662 containerd[1472]: time="2024-10-09T07:53:35.837589206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:53:35.837662 containerd[1472]: time="2024-10-09T07:53:35.837606570Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837662 containerd[1472]: time="2024-10-09T07:53:35.837623147Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837662 containerd[1472]: time="2024-10-09T07:53:35.837635594Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837662 containerd[1472]: time="2024-10-09T07:53:35.837650282Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837670752Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837684834Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837697520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837709171Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837731015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837745349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837767386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.837806 containerd[1472]: time="2024-10-09T07:53:35.837791039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837807369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837826748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837843149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837860459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837879172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837894895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837906832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837918571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837938880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837958439Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837981072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.837994275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.838005 containerd[1472]: time="2024-10-09T07:53:35.838005956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840260460Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840318255Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840334231Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840347611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840358134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840389603Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840403988Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:53:35.841227 containerd[1472]: time="2024-10-09T07:53:35.840414714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 07:53:35.841555 containerd[1472]: time="2024-10-09T07:53:35.840777644Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:53:35.841555 containerd[1472]: time="2024-10-09T07:53:35.840842759Z" level=info msg="Connect containerd service" Oct 9 07:53:35.841555 containerd[1472]: time="2024-10-09T07:53:35.840898229Z" level=info msg="using legacy CRI server" Oct 9 07:53:35.841555 containerd[1472]: time="2024-10-09T07:53:35.840910141Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:53:35.841555 containerd[1472]: time="2024-10-09T07:53:35.841050292Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842051640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:53:35.842900 
containerd[1472]: time="2024-10-09T07:53:35.842228282Z" level=info msg="Start subscribing containerd event" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842304211Z" level=info msg="Start recovering state" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842402774Z" level=info msg="Start event monitor" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842431265Z" level=info msg="Start snapshots syncer" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842444589Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:53:35.842900 containerd[1472]: time="2024-10-09T07:53:35.842453577Z" level=info msg="Start streaming server" Oct 9 07:53:35.845028 containerd[1472]: time="2024-10-09T07:53:35.844834982Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:53:35.845028 containerd[1472]: time="2024-10-09T07:53:35.844899168Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:53:35.845121 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:53:35.847126 containerd[1472]: time="2024-10-09T07:53:35.847083841Z" level=info msg="containerd successfully booted in 0.117239s" Oct 9 07:53:35.881310 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:53:35.901677 systemd-networkd[1371]: eth0: Gained IPv6LL Oct 9 07:53:35.902313 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:35.907909 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:53:35.914537 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:53:35.927817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:35.941123 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:53:35.958348 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:53:35.983868 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:53:36.020848 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:53:36.021514 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:53:36.037102 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:53:36.040546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:53:36.093542 systemd-networkd[1371]: eth1: Gained IPv6LL Oct 9 07:53:36.095017 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:36.097715 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:53:36.113419 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:53:36.123741 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:53:36.129306 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:53:36.251213 tar[1458]: linux-amd64/LICENSE Oct 9 07:53:36.252843 tar[1458]: linux-amd64/README.md Oct 9 07:53:36.273106 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:53:37.114506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:37.115884 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:53:37.119070 systemd[1]: Startup finished in 1.281s (kernel) + 6.563s (initrd) + 5.866s (userspace) = 13.711s. 
Oct 9 07:53:37.133795 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:53:37.550532 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:53:37.558610 systemd[1]: Started sshd@0-64.23.254.253:22-139.178.89.65:52596.service - OpenSSH per-connection server daemon (139.178.89.65:52596). Oct 9 07:53:37.630173 sshd[1570]: Accepted publickey for core from 139.178.89.65 port 52596 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:37.633780 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:37.651019 systemd-logind[1446]: New session 1 of user core. Oct 9 07:53:37.653116 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:53:37.659790 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:53:37.693381 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:53:37.702767 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:53:37.718818 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:53:38.035551 kubelet[1559]: E1009 07:53:38.035150 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:53:38.041661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:53:38.041877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:53:38.043259 systemd[1]: kubelet.service: Consumed 1.345s CPU time. Oct 9 07:53:38.114597 systemd[1574]: Queued start job for default target default.target. Oct 9 07:53:38.126316 systemd[1574]: Created slice app.slice - User Application Slice. Oct 9 07:53:38.126378 systemd[1574]: Reached target paths.target - Paths. Oct 9 07:53:38.126401 systemd[1574]: Reached target timers.target - Timers. Oct 9 07:53:38.128995 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:53:38.146626 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:53:38.146972 systemd[1574]: Reached target sockets.target - Sockets. Oct 9 07:53:38.147122 systemd[1574]: Reached target basic.target - Basic System. Oct 9 07:53:38.147340 systemd[1574]: Reached target default.target - Main User Target. Oct 9 07:53:38.147398 systemd[1574]: Startup finished in 416ms. Oct 9 07:53:38.147698 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:53:38.157310 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:53:38.231940 systemd[1]: Started sshd@1-64.23.254.253:22-139.178.89.65:52598.service - OpenSSH per-connection server daemon (139.178.89.65:52598). Oct 9 07:53:38.290398 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 52598 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:38.292569 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:38.300413 systemd-logind[1446]: New session 2 of user core. Oct 9 07:53:38.306586 systemd[1]: Started session-2.scope - Session 2 of User core. 
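The kubelet exit above is the normal pre-bootstrap failure on an uninitialized node: the unit points --config at /var/lib/kubelet/config.yaml, and that file only exists once a provisioner writes it (kubeadm is the usual tool, though the log never names one). A sketch for confirming the state:

    ls -l /var/lib/kubelet/config.yaml          # absent until the node is bootstrapped
    systemctl status kubelet --no-pager
    journalctl -u kubelet -b --no-pager | tail -n 20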
Oct 9 07:53:38.370389 sshd[1586]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:38.385444 systemd[1]: sshd@1-64.23.254.253:22-139.178.89.65:52598.service: Deactivated successfully. Oct 9 07:53:38.388299 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:53:38.391678 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:53:38.398802 systemd[1]: Started sshd@2-64.23.254.253:22-139.178.89.65:52614.service - OpenSSH per-connection server daemon (139.178.89.65:52614). Oct 9 07:53:38.400844 systemd-logind[1446]: Removed session 2. Oct 9 07:53:38.442794 sshd[1593]: Accepted publickey for core from 139.178.89.65 port 52614 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:38.445816 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:38.454605 systemd-logind[1446]: New session 3 of user core. Oct 9 07:53:38.459780 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:53:38.519182 sshd[1593]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:38.534749 systemd[1]: sshd@2-64.23.254.253:22-139.178.89.65:52614.service: Deactivated successfully. Oct 9 07:53:38.537023 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:53:38.538911 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:53:38.550710 systemd[1]: Started sshd@3-64.23.254.253:22-139.178.89.65:52616.service - OpenSSH per-connection server daemon (139.178.89.65:52616). Oct 9 07:53:38.553457 systemd-logind[1446]: Removed session 3. Oct 9 07:53:38.592341 sshd[1600]: Accepted publickey for core from 139.178.89.65 port 52616 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:38.594449 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:38.600530 systemd-logind[1446]: New session 4 of user core. Oct 9 07:53:38.611887 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:53:38.674488 sshd[1600]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:38.689658 systemd[1]: sshd@3-64.23.254.253:22-139.178.89.65:52616.service: Deactivated successfully. Oct 9 07:53:38.692328 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:53:38.694679 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:53:38.706378 systemd[1]: Started sshd@4-64.23.254.253:22-139.178.89.65:52626.service - OpenSSH per-connection server daemon (139.178.89.65:52626). Oct 9 07:53:38.708043 systemd-logind[1446]: Removed session 4. Oct 9 07:53:38.747286 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 52626 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:38.749232 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:38.754406 systemd-logind[1446]: New session 5 of user core. Oct 9 07:53:38.762493 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:53:38.836161 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:53:38.837042 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:38.850928 sudo[1610]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:38.855247 sshd[1607]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:38.867170 systemd[1]: sshd@4-64.23.254.253:22-139.178.89.65:52626.service: Deactivated successfully. 
Oct 9 07:53:38.870049 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:53:38.873445 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:53:38.878593 systemd[1]: Started sshd@5-64.23.254.253:22-139.178.89.65:52642.service - OpenSSH per-connection server daemon (139.178.89.65:52642). Oct 9 07:53:38.880767 systemd-logind[1446]: Removed session 5. Oct 9 07:53:38.929978 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 52642 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:38.932056 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:38.938482 systemd-logind[1446]: New session 6 of user core. Oct 9 07:53:38.947613 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:53:39.007522 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:53:39.008004 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:39.012961 sudo[1619]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:39.020757 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:53:39.021090 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:39.037624 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:53:39.052783 auditctl[1622]: No rules Oct 9 07:53:39.053280 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:53:39.053517 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:53:39.060698 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:53:39.106851 augenrules[1640]: No rules Oct 9 07:53:39.108265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:53:39.111587 sudo[1618]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:39.115099 sshd[1615]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:39.130559 systemd[1]: sshd@5-64.23.254.253:22-139.178.89.65:52642.service: Deactivated successfully. Oct 9 07:53:39.134571 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:53:39.137048 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:53:39.148754 systemd[1]: Started sshd@6-64.23.254.253:22-139.178.89.65:52650.service - OpenSSH per-connection server daemon (139.178.89.65:52650). Oct 9 07:53:39.150891 systemd-logind[1446]: Removed session 6. Oct 9 07:53:39.195590 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 52650 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:39.197483 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:39.204675 systemd-logind[1446]: New session 7 of user core. Oct 9 07:53:39.211695 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:53:39.273435 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:53:39.273793 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:53:39.733706 systemd[1]: Starting docker.service - Docker Application Container Engine... 
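The sudo sequence above (run as core over SSH) removes the shipped audit rule files and restarts audit-rules, after which both auditctl and augenrules report an empty rule set. Reproduced by hand, it is roughly the following sketch (assuming root):

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules
    auditctl -l          # prints "No rules" when the kernel's audit rule list is empty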
Oct 9 07:53:39.733867 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:53:40.175350 dockerd[1666]: time="2024-10-09T07:53:40.175168279Z" level=info msg="Starting up" Oct 9 07:53:40.428841 dockerd[1666]: time="2024-10-09T07:53:40.428696329Z" level=info msg="Loading containers: start." Oct 9 07:53:40.575320 kernel: Initializing XFRM netlink socket Oct 9 07:53:40.602362 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:40.619958 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:40.662341 systemd-networkd[1371]: docker0: Link UP Oct 9 07:53:40.663084 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Oct 9 07:53:40.687140 dockerd[1666]: time="2024-10-09T07:53:40.686984024Z" level=info msg="Loading containers: done." Oct 9 07:53:40.706264 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck836538859-merged.mount: Deactivated successfully. Oct 9 07:53:40.709095 dockerd[1666]: time="2024-10-09T07:53:40.708916632Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:53:40.709095 dockerd[1666]: time="2024-10-09T07:53:40.709077928Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 9 07:53:40.709320 dockerd[1666]: time="2024-10-09T07:53:40.709298974Z" level=info msg="Daemon has completed initialization" Oct 9 07:53:40.770038 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:53:40.771587 dockerd[1666]: time="2024-10-09T07:53:40.770489920Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:53:41.382241 containerd[1472]: time="2024-10-09T07:53:41.382172214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 9 07:53:42.003498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114805543.mount: Deactivated successfully. 
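dockerd above comes up on the overlay2 storage driver and binds its API to /run/docker.sock; the "Not using native diff" message is informational on kernels built with CONFIG_OVERLAY_FS_REDIRECT_DIR. A quick sanity check from a shell (a sketch):

    docker info 2>/dev/null | grep -E 'Server Version|Storage Driver|Cgroup Driver'
    docker version --format '{{.Server.Version}}'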
Oct 9 07:53:43.111513 containerd[1472]: time="2024-10-09T07:53:43.110416724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:43.112878 containerd[1472]: time="2024-10-09T07:53:43.112814252Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 9 07:53:43.114453 containerd[1472]: time="2024-10-09T07:53:43.114404147Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:43.119262 containerd[1472]: time="2024-10-09T07:53:43.118943370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:43.124112 containerd[1472]: time="2024-10-09T07:53:43.124050934Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 1.741804127s" Oct 9 07:53:43.124112 containerd[1472]: time="2024-10-09T07:53:43.124107775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 9 07:53:43.126216 containerd[1472]: time="2024-10-09T07:53:43.126053816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 9 07:53:44.534899 containerd[1472]: time="2024-10-09T07:53:44.534831528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:44.536857 containerd[1472]: time="2024-10-09T07:53:44.536798194Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 9 07:53:44.537848 containerd[1472]: time="2024-10-09T07:53:44.537784981Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:44.541732 containerd[1472]: time="2024-10-09T07:53:44.541575476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:44.543489 containerd[1472]: time="2024-10-09T07:53:44.542890346Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.416801774s" Oct 9 07:53:44.543489 containerd[1472]: time="2024-10-09T07:53:44.542938087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 9 07:53:44.544079 containerd[1472]: 
time="2024-10-09T07:53:44.543964860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 9 07:53:45.703598 containerd[1472]: time="2024-10-09T07:53:45.703498450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.705153 containerd[1472]: time="2024-10-09T07:53:45.705096595Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 9 07:53:45.706159 containerd[1472]: time="2024-10-09T07:53:45.706103974Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.709833 containerd[1472]: time="2024-10-09T07:53:45.709787099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.710801 containerd[1472]: time="2024-10-09T07:53:45.710757991Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.166566574s" Oct 9 07:53:45.710801 containerd[1472]: time="2024-10-09T07:53:45.710802943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 9 07:53:45.711887 containerd[1472]: time="2024-10-09T07:53:45.711734328Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 9 07:53:46.981403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984145447.mount: Deactivated successfully. 
Oct 9 07:53:47.712788 containerd[1472]: time="2024-10-09T07:53:47.712714155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.714070 containerd[1472]: time="2024-10-09T07:53:47.713810534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 9 07:53:47.715232 containerd[1472]: time="2024-10-09T07:53:47.714927544Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.717816 containerd[1472]: time="2024-10-09T07:53:47.717723448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.718960 containerd[1472]: time="2024-10-09T07:53:47.718913242Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 2.007133318s" Oct 9 07:53:47.719349 containerd[1472]: time="2024-10-09T07:53:47.719100574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 9 07:53:47.719923 containerd[1472]: time="2024-10-09T07:53:47.719883572Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:53:47.722501 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 9 07:53:48.293473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:53:48.301674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:48.334770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088048192.mount: Deactivated successfully. Oct 9 07:53:48.541534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:48.553480 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:53:48.653242 kubelet[1899]: E1009 07:53:48.652515 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:53:48.659707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:53:48.660870 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
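kubelet.service is set to restart automatically, so the same missing-config failure repeats until the node is bootstrapped; the scheduled restart above is attempt 1. A sketch for inspecting the unit's restart policy and counter:

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts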
Oct 9 07:53:49.661457 containerd[1472]: time="2024-10-09T07:53:49.661379040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.663539 containerd[1472]: time="2024-10-09T07:53:49.662864994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:53:49.665446 containerd[1472]: time="2024-10-09T07:53:49.665380903Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.670228 containerd[1472]: time="2024-10-09T07:53:49.670066364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.672009 containerd[1472]: time="2024-10-09T07:53:49.671570666Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.951640411s" Oct 9 07:53:49.672009 containerd[1472]: time="2024-10-09T07:53:49.671621720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:53:49.672627 containerd[1472]: time="2024-10-09T07:53:49.672599375Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 9 07:53:50.232287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242034889.mount: Deactivated successfully. 
Oct 9 07:53:50.242268 containerd[1472]: time="2024-10-09T07:53:50.241454894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:50.243398 containerd[1472]: time="2024-10-09T07:53:50.243163983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 9 07:53:50.244478 containerd[1472]: time="2024-10-09T07:53:50.244412677Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:50.252220 containerd[1472]: time="2024-10-09T07:53:50.251291050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:50.252805 containerd[1472]: time="2024-10-09T07:53:50.251412835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.773793ms" Oct 9 07:53:50.253015 containerd[1472]: time="2024-10-09T07:53:50.252985709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 9 07:53:50.254581 containerd[1472]: time="2024-10-09T07:53:50.254544979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 9 07:53:50.813392 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 9 07:53:50.827247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121408068.mount: Deactivated successfully. 
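The systemd-resolved messages about a "degraded feature set" mean EDNS0 probes to the DigitalOcean resolvers (67.207.67.3 and 67.207.67.2) did not succeed, so resolved falls back to plain UDP; name resolution keeps working, just without EDNS0. A sketch for inspecting the per-link resolver state:

    resolvectl status
    resolvectl query registry.k8s.io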
Oct 9 07:53:52.827923 containerd[1472]: time="2024-10-09T07:53:52.827818882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:52.829405 containerd[1472]: time="2024-10-09T07:53:52.829327375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740" Oct 9 07:53:52.830737 containerd[1472]: time="2024-10-09T07:53:52.830664487Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:52.834334 containerd[1472]: time="2024-10-09T07:53:52.834267447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:52.836086 containerd[1472]: time="2024-10-09T07:53:52.835697412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.581116636s" Oct 9 07:53:52.836086 containerd[1472]: time="2024-10-09T07:53:52.835762267Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 9 07:53:56.148803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:56.160899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:56.205114 systemd[1]: Reloading requested from client PID 2026 ('systemctl') (unit session-7.scope)... Oct 9 07:53:56.205136 systemd[1]: Reloading... Oct 9 07:53:56.370441 zram_generator::config[2077]: No configuration found. Oct 9 07:53:56.501143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:53:56.594489 systemd[1]: Reloading finished in 388 ms. Oct 9 07:53:56.654465 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:53:56.654585 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:53:56.654876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:56.663794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:53:56.787386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:53:56.802756 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:53:56.865607 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:56.865607 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
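During the reload above, systemd warns that docker.socket still lists the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. A drop-in that would make the unit match is sketched here (the drop-in file name is arbitrary):

    mkdir -p /etc/systemd/system/docker.socket.d
    printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' \
        > /etc/systemd/system/docker.socket.d/10-listen.conf
    systemctl daemon-reload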
Oct 9 07:53:56.865607 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:53:56.867041 kubelet[2119]: I1009 07:53:56.866963 2119 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:53:57.412079 kubelet[2119]: I1009 07:53:57.411968 2119 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:53:57.412079 kubelet[2119]: I1009 07:53:57.412019 2119 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:53:57.413077 kubelet[2119]: I1009 07:53:57.412870 2119 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:53:57.445336 kubelet[2119]: I1009 07:53:57.445172 2119 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:53:57.446166 kubelet[2119]: E1009 07:53:57.446111 2119 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.254.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:57.456961 kubelet[2119]: E1009 07:53:57.456881 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:53:57.457296 kubelet[2119]: I1009 07:53:57.457184 2119 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:53:57.463709 kubelet[2119]: I1009 07:53:57.463663 2119 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:53:57.466967 kubelet[2119]: I1009 07:53:57.465622 2119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:53:57.466967 kubelet[2119]: I1009 07:53:57.465975 2119 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:53:57.466967 kubelet[2119]: I1009 07:53:57.466022 2119 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.1.0-4-ec1af0061e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 07:53:57.466967 kubelet[2119]: I1009 07:53:57.466431 2119 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:53:57.467559 kubelet[2119]: I1009 07:53:57.466446 2119 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:53:57.467559 kubelet[2119]: I1009 07:53:57.466697 2119 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:57.470422 kubelet[2119]: I1009 07:53:57.470370 2119 kubelet.go:408] "Attempting to sync node with API server" Oct 9 07:53:57.470648 kubelet[2119]: I1009 07:53:57.470626 2119 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:53:57.470818 kubelet[2119]: I1009 07:53:57.470804 2119 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:53:57.471539 kubelet[2119]: I1009 07:53:57.471515 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:53:57.477571 kubelet[2119]: W1009 07:53:57.477237 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.254.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-4-ec1af0061e&limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:57.477571 kubelet[2119]: E1009 07:53:57.477330 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.23.254.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-4-ec1af0061e&limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:57.479670 kubelet[2119]: W1009 07:53:57.479599 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.254.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:57.479670 kubelet[2119]: E1009 07:53:57.479673 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.254.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:57.479902 kubelet[2119]: I1009 07:53:57.479801 2119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:53:57.485296 kubelet[2119]: I1009 07:53:57.484669 2119 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:53:57.485460 kubelet[2119]: W1009 07:53:57.485343 2119 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:53:57.486572 kubelet[2119]: I1009 07:53:57.486545 2119 server.go:1269] "Started kubelet" Oct 9 07:53:57.486923 kubelet[2119]: I1009 07:53:57.486873 2119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:53:57.491165 kubelet[2119]: I1009 07:53:57.491083 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:53:57.491728 kubelet[2119]: I1009 07:53:57.491702 2119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:53:57.492206 kubelet[2119]: I1009 07:53:57.492112 2119 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:53:57.498438 kubelet[2119]: E1009 07:53:57.495691 2119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.254.253:6443/api/v1/namespaces/default/events\": dial tcp 64.23.254.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.1.0-4-ec1af0061e.17fcb99c30b33b0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-4-ec1af0061e,UID:ci-4081.1.0-4-ec1af0061e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-4-ec1af0061e,},FirstTimestamp:2024-10-09 07:53:57.486517004 +0000 UTC m=+0.666198249,LastTimestamp:2024-10-09 07:53:57.486517004 +0000 UTC m=+0.666198249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-4-ec1af0061e,}" Oct 9 07:53:57.498438 kubelet[2119]: I1009 07:53:57.498211 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:53:57.499422 kubelet[2119]: I1009 07:53:57.499387 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:53:57.503419 kubelet[2119]: I1009 07:53:57.503385 2119 
volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:53:57.503577 kubelet[2119]: I1009 07:53:57.503537 2119 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:53:57.504001 kubelet[2119]: I1009 07:53:57.503654 2119 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:53:57.504840 kubelet[2119]: E1009 07:53:57.504231 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-4-ec1af0061e\" not found" Oct 9 07:53:57.504840 kubelet[2119]: E1009 07:53:57.504741 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.254.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-4-ec1af0061e?timeout=10s\": dial tcp 64.23.254.253:6443: connect: connection refused" interval="200ms" Oct 9 07:53:57.504840 kubelet[2119]: W1009 07:53:57.504820 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.254.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:57.504972 kubelet[2119]: E1009 07:53:57.504864 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.254.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:57.514150 kubelet[2119]: I1009 07:53:57.513635 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:53:57.516468 kubelet[2119]: I1009 07:53:57.516438 2119 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:53:57.516629 kubelet[2119]: I1009 07:53:57.516621 2119 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:53:57.532117 kubelet[2119]: I1009 07:53:57.531949 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:53:57.534220 kubelet[2119]: I1009 07:53:57.533685 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:53:57.534220 kubelet[2119]: I1009 07:53:57.533741 2119 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:53:57.534220 kubelet[2119]: I1009 07:53:57.533765 2119 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:53:57.534220 kubelet[2119]: E1009 07:53:57.533820 2119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:53:57.542477 kubelet[2119]: E1009 07:53:57.542441 2119 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:53:57.543321 kubelet[2119]: W1009 07:53:57.543011 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.254.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:57.543321 kubelet[2119]: E1009 07:53:57.543075 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.254.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:57.550729 kubelet[2119]: I1009 07:53:57.550691 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:53:57.550729 kubelet[2119]: I1009 07:53:57.550714 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:53:57.550949 kubelet[2119]: I1009 07:53:57.550748 2119 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:53:57.555943 kubelet[2119]: I1009 07:53:57.555878 2119 policy_none.go:49] "None policy: Start" Oct 9 07:53:57.557519 kubelet[2119]: I1009 07:53:57.557489 2119 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:53:57.558011 kubelet[2119]: I1009 07:53:57.557814 2119 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:53:57.570769 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:53:57.582090 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:53:57.589409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:53:57.604084 kubelet[2119]: I1009 07:53:57.604023 2119 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:53:57.604353 kubelet[2119]: I1009 07:53:57.604332 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:53:57.604425 kubelet[2119]: I1009 07:53:57.604351 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:53:57.605138 kubelet[2119]: I1009 07:53:57.605117 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:53:57.608702 kubelet[2119]: E1009 07:53:57.608655 2119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.1.0-4-ec1af0061e\" not found" Oct 9 07:53:57.644759 systemd[1]: Created slice kubepods-burstable-pod0ff2918f9be8bd8ac669cc34b1c5b324.slice - libcontainer container kubepods-burstable-pod0ff2918f9be8bd8ac669cc34b1c5b324.slice. Oct 9 07:53:57.665810 systemd[1]: Created slice kubepods-burstable-pod049b4cc63b463fa0ed7e990454432a6c.slice - libcontainer container kubepods-burstable-pod049b4cc63b463fa0ed7e990454432a6c.slice. Oct 9 07:53:57.683758 systemd[1]: Created slice kubepods-burstable-pod36ec41ccb7009d50c472b8acc1ce01b6.slice - libcontainer container kubepods-burstable-pod36ec41ccb7009d50c472b8acc1ce01b6.slice. 
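An editorial aside on the slices being created here: the kubelet is running with CgroupDriver "systemd" (see the nodeConfig dump above), so each QoS class and each pod gets its own systemd slice named kubepods[-<qos>]-pod<uid>.slice, with hyphens in the pod UID escaped to underscores (visible later in this log for the kube-proxy and cilium pods). A minimal sketch of that naming pattern as observed in this log, not the kubelet's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName builds a systemd slice unit name for a pod, following the pattern
// visible in this log: kubepods[-<qos>]-pod<uid>.slice, with hyphens in the
// UID escaped to underscores. The guaranteed special case (no QoS segment) is
// an assumption about pods that sit directly under kubepods.slice.
func sliceName(qos, uid string) string {
	parts := []string{"kubepods"}
	if qos != "guaranteed" {
		parts = append(parts, qos)
	}
	parts = append(parts, "pod"+strings.ReplaceAll(uid, "-", "_"))
	return strings.Join(parts, "-") + ".slice"
}

func main() {
	// Both UIDs are copied from this log.
	fmt.Println(sliceName("burstable", "0ff2918f9be8bd8ac669cc34b1c5b324"))
	// kubepods-burstable-pod0ff2918f9be8bd8ac669cc34b1c5b324.slice
	fmt.Println(sliceName("besteffort", "50b26ec4-9a6b-4a13-9a92-d13239f1e606"))
	// kubepods-besteffort-pod50b26ec4_9a6b_4a13_9a92_d13239f1e606.slice
}
```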
Oct 9 07:53:57.705168 kubelet[2119]: E1009 07:53:57.705112 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.254.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-4-ec1af0061e?timeout=10s\": dial tcp 64.23.254.253:6443: connect: connection refused" interval="400ms" Oct 9 07:53:57.705811 kubelet[2119]: I1009 07:53:57.705573 2119 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.705906 kubelet[2119]: E1009 07:53:57.705881 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.254.253:6443/api/v1/nodes\": dial tcp 64.23.254.253:6443: connect: connection refused" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805523 kubelet[2119]: I1009 07:53:57.805462 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805523 kubelet[2119]: I1009 07:53:57.805510 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805523 kubelet[2119]: I1009 07:53:57.805531 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805831 kubelet[2119]: I1009 07:53:57.805549 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805831 kubelet[2119]: I1009 07:53:57.805566 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ff2918f9be8bd8ac669cc34b1c5b324-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-4-ec1af0061e\" (UID: \"0ff2918f9be8bd8ac669cc34b1c5b324\") " pod="kube-system/kube-scheduler-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805831 kubelet[2119]: I1009 07:53:57.805582 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805831 kubelet[2119]: I1009 07:53:57.805599 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.805831 kubelet[2119]: I1009 07:53:57.805614 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.806024 kubelet[2119]: I1009 07:53:57.805627 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.908237 kubelet[2119]: I1009 07:53:57.908012 2119 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.908896 kubelet[2119]: E1009 07:53:57.908486 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.254.253:6443/api/v1/nodes\": dial tcp 64.23.254.253:6443: connect: connection refused" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:57.963572 kubelet[2119]: E1009 07:53:57.962704 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:57.965046 containerd[1472]: time="2024-10-09T07:53:57.964962039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-4-ec1af0061e,Uid:0ff2918f9be8bd8ac669cc34b1c5b324,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:57.968487 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Oct 9 07:53:57.970838 kubelet[2119]: E1009 07:53:57.970528 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:57.971923 containerd[1472]: time="2024-10-09T07:53:57.971432214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-4-ec1af0061e,Uid:049b4cc63b463fa0ed7e990454432a6c,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:57.987060 kubelet[2119]: E1009 07:53:57.987016 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:57.987959 containerd[1472]: time="2024-10-09T07:53:57.987904329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-4-ec1af0061e,Uid:36ec41ccb7009d50c472b8acc1ce01b6,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:58.105651 kubelet[2119]: E1009 07:53:58.105609 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.254.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-4-ec1af0061e?timeout=10s\": dial tcp 64.23.254.253:6443: connect: connection refused" interval="800ms" Oct 9 07:53:58.310255 kubelet[2119]: I1009 07:53:58.309766 2119 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:58.310255 kubelet[2119]: E1009 07:53:58.310116 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.254.253:6443/api/v1/nodes\": dial tcp 64.23.254.253:6443: connect: connection refused" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:58.461672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854563988.mount: Deactivated successfully. 
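The lease-controller retries above back off by doubling: 200ms, then 400ms, then 800ms, with 1.6s appearing a little further down, for as long as the API server at 64.23.254.253:6443 refuses connections. A small sketch reproducing that progression; the ceiling value is an illustrative assumption, not something stated in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Reproduce the retry intervals reported by controller.go above:
	// 200ms, 400ms, 800ms, 1.6s, ... doubling while the connection keeps
	// being refused. The 7s ceiling is an illustrative assumption.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: will retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```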
Oct 9 07:53:58.469524 containerd[1472]: time="2024-10-09T07:53:58.469469620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:58.470672 containerd[1472]: time="2024-10-09T07:53:58.470626803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:58.471687 containerd[1472]: time="2024-10-09T07:53:58.471635197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:53:58.471928 containerd[1472]: time="2024-10-09T07:53:58.471893905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:53:58.491526 containerd[1472]: time="2024-10-09T07:53:58.490160224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:58.510637 containerd[1472]: time="2024-10-09T07:53:58.510556461Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:58.513226 containerd[1472]: time="2024-10-09T07:53:58.513160542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:53:58.518387 containerd[1472]: time="2024-10-09T07:53:58.518331857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:53:58.519789 containerd[1472]: time="2024-10-09T07:53:58.519746850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.196415ms" Oct 9 07:53:58.520589 containerd[1472]: time="2024-10-09T07:53:58.520545982Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.435827ms" Oct 9 07:53:58.524448 containerd[1472]: time="2024-10-09T07:53:58.524397287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.084027ms" Oct 9 07:53:58.562321 kubelet[2119]: W1009 07:53:58.561758 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.254.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:58.562666 kubelet[2119]: E1009 
07:53:58.562604 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.254.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:58.739389 containerd[1472]: time="2024-10-09T07:53:58.739117519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:58.740278 containerd[1472]: time="2024-10-09T07:53:58.739711107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:58.741345 containerd[1472]: time="2024-10-09T07:53:58.740729121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.741345 containerd[1472]: time="2024-10-09T07:53:58.740838249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747889487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747941514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747952859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.748046640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747599447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747936924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:58.748262 containerd[1472]: time="2024-10-09T07:53:58.747968881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.748919 containerd[1472]: time="2024-10-09T07:53:58.748772345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:58.775440 systemd[1]: Started cri-containerd-38b3ac2dfa29637c5b458cd81c90681b3f8302cc6928a5f03856051604e903a8.scope - libcontainer container 38b3ac2dfa29637c5b458cd81c90681b3f8302cc6928a5f03856051604e903a8. Oct 9 07:53:58.780769 systemd[1]: Started cri-containerd-2c37d284e470589f662ab79f5845f0f7f370cbdbee01ea347c1290227294450c.scope - libcontainer container 2c37d284e470589f662ab79f5845f0f7f370cbdbee01ea347c1290227294450c. Oct 9 07:53:58.787459 systemd[1]: Started cri-containerd-3d21e5f31aaba02d6b33be5f865f12114edb207aa8d6ccf8936cc0ac9973bf10.scope - libcontainer container 3d21e5f31aaba02d6b33be5f865f12114edb207aa8d6ccf8936cc0ac9973bf10. 
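The three "Pulled image" messages record the pause (sandbox) image being resolved for each control-plane pod sandbox, including the image id, repo digest, byte size, and how long the pull took. Purely as a log-processing illustration, a sketch that extracts those fields from one such line:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	// One of the containerd "Pulled image" messages from this log, unescaped.
	msg := `Pulled image "registry.k8s.io/pause:3.8" with image id "sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517", repo tag "registry.k8s.io/pause:3.8", repo digest "registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d", size "311286" in 548.196415ms`

	// Pull out the image reference, the byte size, and the pull duration.
	re := regexp.MustCompile(`Pulled image "([^"]+)".*size "(\d+)" in (\S+)`)
	m := re.FindStringSubmatch(msg)
	if m == nil {
		panic("line did not match")
	}

	d, err := time.ParseDuration(m[3])
	if err != nil {
		panic(err)
	}
	fmt.Printf("image=%s size=%s bytes duration=%v\n", m[1], m[2], d.Round(time.Millisecond))
}
```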
Oct 9 07:53:58.853646 containerd[1472]: time="2024-10-09T07:53:58.851578056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-4-ec1af0061e,Uid:049b4cc63b463fa0ed7e990454432a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"38b3ac2dfa29637c5b458cd81c90681b3f8302cc6928a5f03856051604e903a8\"" Oct 9 07:53:58.854303 kubelet[2119]: E1009 07:53:58.854098 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:58.860939 containerd[1472]: time="2024-10-09T07:53:58.860247570Z" level=info msg="CreateContainer within sandbox \"38b3ac2dfa29637c5b458cd81c90681b3f8302cc6928a5f03856051604e903a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:53:58.878358 containerd[1472]: time="2024-10-09T07:53:58.878308591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-4-ec1af0061e,Uid:0ff2918f9be8bd8ac669cc34b1c5b324,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d21e5f31aaba02d6b33be5f865f12114edb207aa8d6ccf8936cc0ac9973bf10\"" Oct 9 07:53:58.879541 kubelet[2119]: E1009 07:53:58.879510 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:58.882672 containerd[1472]: time="2024-10-09T07:53:58.882385040Z" level=info msg="CreateContainer within sandbox \"3d21e5f31aaba02d6b33be5f865f12114edb207aa8d6ccf8936cc0ac9973bf10\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:53:58.897833 kubelet[2119]: W1009 07:53:58.897752 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.254.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:58.897833 kubelet[2119]: E1009 07:53:58.897841 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.254.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:58.901823 containerd[1472]: time="2024-10-09T07:53:58.901587150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-4-ec1af0061e,Uid:36ec41ccb7009d50c472b8acc1ce01b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c37d284e470589f662ab79f5845f0f7f370cbdbee01ea347c1290227294450c\"" Oct 9 07:53:58.903510 kubelet[2119]: E1009 07:53:58.903352 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:58.906276 kubelet[2119]: E1009 07:53:58.906165 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.254.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-4-ec1af0061e?timeout=10s\": dial tcp 64.23.254.253:6443: connect: connection refused" interval="1.6s" Oct 9 07:53:58.906860 containerd[1472]: time="2024-10-09T07:53:58.906160554Z" level=info msg="CreateContainer within sandbox 
\"2c37d284e470589f662ab79f5845f0f7f370cbdbee01ea347c1290227294450c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:53:58.910063 containerd[1472]: time="2024-10-09T07:53:58.910010016Z" level=info msg="CreateContainer within sandbox \"38b3ac2dfa29637c5b458cd81c90681b3f8302cc6928a5f03856051604e903a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b00e752ffaa7ecc2be8b292736872f9cf47848197c18cbbfd483ca1e9e6ce6b4\"" Oct 9 07:53:58.910990 containerd[1472]: time="2024-10-09T07:53:58.910927875Z" level=info msg="StartContainer for \"b00e752ffaa7ecc2be8b292736872f9cf47848197c18cbbfd483ca1e9e6ce6b4\"" Oct 9 07:53:58.916961 kubelet[2119]: W1009 07:53:58.916712 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.254.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-4-ec1af0061e&limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:58.916961 kubelet[2119]: E1009 07:53:58.916913 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.254.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-4-ec1af0061e&limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:58.923149 containerd[1472]: time="2024-10-09T07:53:58.922966786Z" level=info msg="CreateContainer within sandbox \"3d21e5f31aaba02d6b33be5f865f12114edb207aa8d6ccf8936cc0ac9973bf10\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"79934e120684895b8718585b5f096e134ab858f8ebdd0244d4168e1a3fe7d520\"" Oct 9 07:53:58.924686 containerd[1472]: time="2024-10-09T07:53:58.924506584Z" level=info msg="StartContainer for \"79934e120684895b8718585b5f096e134ab858f8ebdd0244d4168e1a3fe7d520\"" Oct 9 07:53:58.926524 containerd[1472]: time="2024-10-09T07:53:58.926430768Z" level=info msg="CreateContainer within sandbox \"2c37d284e470589f662ab79f5845f0f7f370cbdbee01ea347c1290227294450c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7d49d4cde74a7e04ca787b93d81aee1d2c6664868b777e2df6842c9da07b5e2\"" Oct 9 07:53:58.927531 containerd[1472]: time="2024-10-09T07:53:58.927470260Z" level=info msg="StartContainer for \"a7d49d4cde74a7e04ca787b93d81aee1d2c6664868b777e2df6842c9da07b5e2\"" Oct 9 07:53:58.954428 systemd[1]: Started cri-containerd-b00e752ffaa7ecc2be8b292736872f9cf47848197c18cbbfd483ca1e9e6ce6b4.scope - libcontainer container b00e752ffaa7ecc2be8b292736872f9cf47848197c18cbbfd483ca1e9e6ce6b4. Oct 9 07:53:58.971733 systemd[1]: Started cri-containerd-79934e120684895b8718585b5f096e134ab858f8ebdd0244d4168e1a3fe7d520.scope - libcontainer container 79934e120684895b8718585b5f096e134ab858f8ebdd0244d4168e1a3fe7d520. Oct 9 07:53:58.985498 systemd[1]: Started cri-containerd-a7d49d4cde74a7e04ca787b93d81aee1d2c6664868b777e2df6842c9da07b5e2.scope - libcontainer container a7d49d4cde74a7e04ca787b93d81aee1d2c6664868b777e2df6842c9da07b5e2. 
Oct 9 07:53:59.052486 containerd[1472]: time="2024-10-09T07:53:59.052413202Z" level=info msg="StartContainer for \"b00e752ffaa7ecc2be8b292736872f9cf47848197c18cbbfd483ca1e9e6ce6b4\" returns successfully" Oct 9 07:53:59.077731 containerd[1472]: time="2024-10-09T07:53:59.077670074Z" level=info msg="StartContainer for \"a7d49d4cde74a7e04ca787b93d81aee1d2c6664868b777e2df6842c9da07b5e2\" returns successfully" Oct 9 07:53:59.089944 containerd[1472]: time="2024-10-09T07:53:59.089889022Z" level=info msg="StartContainer for \"79934e120684895b8718585b5f096e134ab858f8ebdd0244d4168e1a3fe7d520\" returns successfully" Oct 9 07:53:59.112390 kubelet[2119]: I1009 07:53:59.112232 2119 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:59.114086 kubelet[2119]: E1009 07:53:59.113899 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.254.253:6443/api/v1/nodes\": dial tcp 64.23.254.253:6443: connect: connection refused" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:53:59.136871 kubelet[2119]: W1009 07:53:59.136708 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.254.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.254.253:6443: connect: connection refused Oct 9 07:53:59.136871 kubelet[2119]: E1009 07:53:59.136812 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.254.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.254.253:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:53:59.561380 kubelet[2119]: E1009 07:53:59.560749 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:59.566788 kubelet[2119]: E1009 07:53:59.566672 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:59.570183 kubelet[2119]: E1009 07:53:59.570065 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:00.577787 kubelet[2119]: E1009 07:54:00.577737 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:00.718377 kubelet[2119]: I1009 07:54:00.716052 2119 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:01.578272 kubelet[2119]: E1009 07:54:01.578138 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:01.758530 kubelet[2119]: E1009 07:54:01.758482 2119 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.1.0-4-ec1af0061e\" not found" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:01.815475 kubelet[2119]: I1009 07:54:01.815415 2119 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:01.815475 
kubelet[2119]: E1009 07:54:01.815477 2119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.1.0-4-ec1af0061e\": node \"ci-4081.1.0-4-ec1af0061e\" not found" Oct 9 07:54:01.864869 kubelet[2119]: E1009 07:54:01.864316 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-4-ec1af0061e.17fcb99c30b33b0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-4-ec1af0061e,UID:ci-4081.1.0-4-ec1af0061e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-4-ec1af0061e,},FirstTimestamp:2024-10-09 07:53:57.486517004 +0000 UTC m=+0.666198249,LastTimestamp:2024-10-09 07:53:57.486517004 +0000 UTC m=+0.666198249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-4-ec1af0061e,}" Oct 9 07:54:01.928004 kubelet[2119]: E1009 07:54:01.927882 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-4-ec1af0061e.17fcb99c34083945 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-4-ec1af0061e,UID:ci-4081.1.0-4-ec1af0061e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-4-ec1af0061e,},FirstTimestamp:2024-10-09 07:53:57.542418757 +0000 UTC m=+0.722100007,LastTimestamp:2024-10-09 07:53:57.542418757 +0000 UTC m=+0.722100007,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-4-ec1af0061e,}" Oct 9 07:54:01.991532 kubelet[2119]: E1009 07:54:01.991360 2119 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.1.0-4-ec1af0061e.17fcb99c34765ba5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-4-ec1af0061e,UID:ci-4081.1.0-4-ec1af0061e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.1.0-4-ec1af0061e status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-4-ec1af0061e,},FirstTimestamp:2024-10-09 07:53:57.549636517 +0000 UTC m=+0.729317759,LastTimestamp:2024-10-09 07:53:57.549636517 +0000 UTC m=+0.729317759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-4-ec1af0061e,}" Oct 9 07:54:02.479747 kubelet[2119]: I1009 07:54:02.479692 2119 apiserver.go:52] "Watching apiserver" Oct 9 07:54:02.504116 kubelet[2119]: I1009 07:54:02.504018 2119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 07:54:03.532486 kubelet[2119]: W1009 07:54:03.532359 2119 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:03.533387 kubelet[2119]: E1009 07:54:03.532717 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:03.580429 kubelet[2119]: E1009 07:54:03.580394 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:04.254358 systemd[1]: Reloading requested from client PID 2397 ('systemctl') (unit session-7.scope)... Oct 9 07:54:04.254376 systemd[1]: Reloading... Oct 9 07:54:04.380218 zram_generator::config[2440]: No configuration found. Oct 9 07:54:04.502862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:54:04.599731 systemd[1]: Reloading finished in 344 ms. Oct 9 07:54:04.645790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:04.657327 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:54:04.657792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:04.657970 systemd[1]: kubelet.service: Consumed 1.139s CPU time, 112.2M memory peak, 0B memory swap peak. Oct 9 07:54:04.664756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:54:04.819481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:54:04.831820 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:54:04.889332 kubelet[2486]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:54:04.889332 kubelet[2486]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:54:04.889332 kubelet[2486]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:54:04.889992 kubelet[2486]: I1009 07:54:04.889285 2486 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:54:04.903116 kubelet[2486]: I1009 07:54:04.902746 2486 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:54:04.903116 kubelet[2486]: I1009 07:54:04.902785 2486 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:54:04.905080 kubelet[2486]: I1009 07:54:04.904994 2486 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:54:04.909376 kubelet[2486]: I1009 07:54:04.909044 2486 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
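The warnings.go:70 messages fire because the node name ci-4081.1.0-4-ec1af0061e contains dots, so the mirror-pod names derived from it are not valid DNS-1123 labels and end up in pod hostnames. A sketch of the label check that distinction rests on; the dotless variant is hypothetical:

```go
package main

import (
	"fmt"
	"regexp"
)

// rfc1123Label matches a single DNS-1123 label: lowercase alphanumerics and
// '-', starting and ending with an alphanumeric, at most 63 characters.
var rfc1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$`)

func main() {
	for _, name := range []string{
		"ci-4081.1.0-4-ec1af0061e", // actual node name from this log; the dots trigger the warning
		"ci-4081-1-0-4-ec1af0061e", // hypothetical dotless variant that would pass
	} {
		fmt.Printf("%-26s is a DNS label: %v\n", name, rfc1123Label.MatchString(name))
	}
}
```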
Oct 9 07:54:04.913254 kubelet[2486]: I1009 07:54:04.912859 2486 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:54:04.918129 kubelet[2486]: E1009 07:54:04.918074 2486 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:54:04.918129 kubelet[2486]: I1009 07:54:04.918119 2486 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:54:04.923213 kubelet[2486]: I1009 07:54:04.923065 2486 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:54:04.923353 kubelet[2486]: I1009 07:54:04.923340 2486 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:54:04.923558 kubelet[2486]: I1009 07:54:04.923507 2486 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:54:04.923810 kubelet[2486]: I1009 07:54:04.923557 2486 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.1.0-4-ec1af0061e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 07:54:04.923985 kubelet[2486]: I1009 07:54:04.923948 2486 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:54:04.924107 kubelet[2486]: I1009 07:54:04.924016 2486 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:54:04.924107 kubelet[2486]: I1009 07:54:04.924088 2486 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:04.924286 kubelet[2486]: I1009 07:54:04.924274 2486 kubelet.go:408] "Attempting to sync node with API server" Oct 9 07:54:04.924338 kubelet[2486]: I1009 07:54:04.924306 2486 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:54:04.924366 kubelet[2486]: I1009 07:54:04.924343 
2486 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:54:04.924366 kubelet[2486]: I1009 07:54:04.924361 2486 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:54:04.930727 kubelet[2486]: I1009 07:54:04.928354 2486 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:54:04.930727 kubelet[2486]: I1009 07:54:04.928987 2486 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:54:04.930727 kubelet[2486]: I1009 07:54:04.929514 2486 server.go:1269] "Started kubelet" Oct 9 07:54:04.935847 kubelet[2486]: I1009 07:54:04.935815 2486 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:54:04.939222 kubelet[2486]: I1009 07:54:04.938851 2486 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:54:04.942737 kubelet[2486]: I1009 07:54:04.942703 2486 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:54:04.945621 kubelet[2486]: I1009 07:54:04.945523 2486 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:54:04.945899 kubelet[2486]: I1009 07:54:04.945880 2486 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:54:04.952458 kubelet[2486]: I1009 07:54:04.952409 2486 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:54:04.954128 kubelet[2486]: I1009 07:54:04.954100 2486 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:54:04.955341 kubelet[2486]: E1009 07:54:04.955301 2486 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.1.0-4-ec1af0061e\" not found" Oct 9 07:54:04.956862 kubelet[2486]: I1009 07:54:04.956838 2486 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:54:04.957043 kubelet[2486]: I1009 07:54:04.957027 2486 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:54:04.961338 kubelet[2486]: I1009 07:54:04.961297 2486 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:54:04.963688 kubelet[2486]: I1009 07:54:04.963660 2486 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:54:04.965227 kubelet[2486]: I1009 07:54:04.964309 2486 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:54:04.965227 kubelet[2486]: I1009 07:54:04.964341 2486 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:54:04.965227 kubelet[2486]: E1009 07:54:04.964389 2486 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:54:04.966736 kubelet[2486]: I1009 07:54:04.963711 2486 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:54:04.967214 kubelet[2486]: I1009 07:54:04.966958 2486 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:54:04.976920 kubelet[2486]: E1009 07:54:04.976694 2486 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:54:04.977436 kubelet[2486]: I1009 07:54:04.977239 2486 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:54:05.038899 kubelet[2486]: I1009 07:54:05.038870 2486 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:54:05.039214 kubelet[2486]: I1009 07:54:05.039033 2486 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:54:05.039214 kubelet[2486]: I1009 07:54:05.039055 2486 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:54:05.039421 kubelet[2486]: I1009 07:54:05.039407 2486 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:54:05.039487 kubelet[2486]: I1009 07:54:05.039467 2486 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:54:05.039800 kubelet[2486]: I1009 07:54:05.039525 2486 policy_none.go:49] "None policy: Start" Oct 9 07:54:05.040287 kubelet[2486]: I1009 07:54:05.040272 2486 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:54:05.041129 kubelet[2486]: I1009 07:54:05.040356 2486 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:54:05.041129 kubelet[2486]: I1009 07:54:05.040542 2486 state_mem.go:75] "Updated machine memory state" Oct 9 07:54:05.044743 kubelet[2486]: I1009 07:54:05.044723 2486 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:54:05.045322 kubelet[2486]: I1009 07:54:05.045304 2486 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:54:05.045417 kubelet[2486]: I1009 07:54:05.045389 2486 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:54:05.045668 kubelet[2486]: I1009 07:54:05.045653 2486 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:54:05.084107 kubelet[2486]: W1009 07:54:05.084026 2486 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:05.084508 kubelet[2486]: W1009 07:54:05.084471 2486 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:05.084709 kubelet[2486]: W1009 07:54:05.084673 2486 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:05.084942 kubelet[2486]: E1009 07:54:05.084914 2486 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.1.0-4-ec1af0061e\" already exists" pod="kube-system/kube-scheduler-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.146987 kubelet[2486]: I1009 07:54:05.146855 2486 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.157693 kubelet[2486]: I1009 07:54:05.157332 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.157693 kubelet[2486]: I1009 07:54:05.157379 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.157693 kubelet[2486]: I1009 07:54:05.157411 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.157693 kubelet[2486]: I1009 07:54:05.157436 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.157693 kubelet[2486]: I1009 07:54:05.157459 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.158027 kubelet[2486]: I1009 07:54:05.157508 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ff2918f9be8bd8ac669cc34b1c5b324-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-4-ec1af0061e\" (UID: \"0ff2918f9be8bd8ac669cc34b1c5b324\") " pod="kube-system/kube-scheduler-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.158027 kubelet[2486]: I1009 07:54:05.157536 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36ec41ccb7009d50c472b8acc1ce01b6-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" (UID: \"36ec41ccb7009d50c472b8acc1ce01b6\") " pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.158027 kubelet[2486]: I1009 07:54:05.157559 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.158027 kubelet[2486]: I1009 07:54:05.157586 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/049b4cc63b463fa0ed7e990454432a6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-4-ec1af0061e\" (UID: \"049b4cc63b463fa0ed7e990454432a6c\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.163623 kubelet[2486]: I1009 07:54:05.163524 2486 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.163623 kubelet[2486]: I1009 07:54:05.163628 2486 kubelet_node_status.go:75] "Successfully 
registered node" node="ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:05.270208 sudo[2517]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 9 07:54:05.270728 sudo[2517]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 9 07:54:05.385445 kubelet[2486]: E1009 07:54:05.385398 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:05.387281 kubelet[2486]: E1009 07:54:05.386970 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:05.387281 kubelet[2486]: E1009 07:54:05.387158 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:05.927221 kubelet[2486]: I1009 07:54:05.925349 2486 apiserver.go:52] "Watching apiserver" Oct 9 07:54:05.957756 kubelet[2486]: I1009 07:54:05.957707 2486 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 07:54:06.013450 kubelet[2486]: E1009 07:54:06.013385 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:06.014565 kubelet[2486]: E1009 07:54:06.014538 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:06.029544 kubelet[2486]: W1009 07:54:06.029465 2486 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:54:06.029987 kubelet[2486]: E1009 07:54:06.029761 2486 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.1.0-4-ec1af0061e\" already exists" pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" Oct 9 07:54:06.030245 kubelet[2486]: E1009 07:54:06.030153 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:06.053291 sudo[2517]: pam_unix(sudo:session): session closed for user root Oct 9 07:54:06.070601 kubelet[2486]: I1009 07:54:06.069842 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.1.0-4-ec1af0061e" podStartSLOduration=1.0698099 podStartE2EDuration="1.0698099s" podCreationTimestamp="2024-10-09 07:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:06.068549598 +0000 UTC m=+1.231725405" watchObservedRunningTime="2024-10-09 07:54:06.0698099 +0000 UTC m=+1.232985707" Oct 9 07:54:06.091386 kubelet[2486]: I1009 07:54:06.091002 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.1.0-4-ec1af0061e" podStartSLOduration=1.090973953 podStartE2EDuration="1.090973953s" podCreationTimestamp="2024-10-09 07:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:06.090069935 +0000 UTC m=+1.253245747" watchObservedRunningTime="2024-10-09 07:54:06.090973953 +0000 UTC m=+1.254149768" Oct 9 07:54:06.139875 kubelet[2486]: I1009 07:54:06.139811 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.1.0-4-ec1af0061e" podStartSLOduration=3.139780395 podStartE2EDuration="3.139780395s" podCreationTimestamp="2024-10-09 07:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:06.110459003 +0000 UTC m=+1.273634808" watchObservedRunningTime="2024-10-09 07:54:06.139780395 +0000 UTC m=+1.302956186" Oct 9 07:54:07.016768 kubelet[2486]: E1009 07:54:07.016737 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:07.641961 kubelet[2486]: E1009 07:54:07.641925 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:07.712062 sudo[1651]: pam_unix(sudo:session): session closed for user root Oct 9 07:54:07.716419 sshd[1648]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:07.721579 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:54:07.721893 systemd[1]: sshd@6-64.23.254.253:22-139.178.89.65:52650.service: Deactivated successfully. Oct 9 07:54:07.724629 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:54:07.724954 systemd[1]: session-7.scope: Consumed 5.851s CPU time, 150.4M memory peak, 0B memory swap peak. Oct 9 07:54:07.725931 systemd-logind[1446]: Removed session 7. Oct 9 07:54:09.710648 kubelet[2486]: E1009 07:54:09.710116 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:10.023758 kubelet[2486]: E1009 07:54:10.023586 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:10.657556 kubelet[2486]: I1009 07:54:10.657307 2486 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:54:10.658742 containerd[1472]: time="2024-10-09T07:54:10.658654497Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 07:54:10.660084 kubelet[2486]: I1009 07:54:10.659432 2486 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:54:11.368621 systemd-resolved[1326]: Clock change detected. Flushing caches. Oct 9 07:54:11.370832 systemd-timesyncd[1350]: Contacted time server 5.78.62.36:123 (2.flatcar.pool.ntp.org). Oct 9 07:54:11.371016 systemd-timesyncd[1350]: Initial clock synchronization to Wed 2024-10-09 07:54:11.368462 UTC. Oct 9 07:54:12.096890 systemd[1]: Created slice kubepods-besteffort-pod50b26ec4_9a6b_4a13_9a92_d13239f1e606.slice - libcontainer container kubepods-besteffort-pod50b26ec4_9a6b_4a13_9a92_d13239f1e606.slice. 
Oct 9 07:54:12.124656 systemd[1]: Created slice kubepods-burstable-pod82ef0ccc_a252_48be_ba53_b5308961843a.slice - libcontainer container kubepods-burstable-pod82ef0ccc_a252_48be_ba53_b5308961843a.slice. Oct 9 07:54:12.134770 kubelet[2486]: I1009 07:54:12.133941 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhlz\" (UniqueName: \"kubernetes.io/projected/50b26ec4-9a6b-4a13-9a92-d13239f1e606-kube-api-access-gxhlz\") pod \"kube-proxy-gxxfl\" (UID: \"50b26ec4-9a6b-4a13-9a92-d13239f1e606\") " pod="kube-system/kube-proxy-gxxfl" Oct 9 07:54:12.134770 kubelet[2486]: I1009 07:54:12.134014 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-etc-cni-netd\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.134770 kubelet[2486]: I1009 07:54:12.134040 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-lib-modules\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.134770 kubelet[2486]: I1009 07:54:12.134065 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82ef0ccc-a252-48be-ba53-b5308961843a-clustermesh-secrets\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.134770 kubelet[2486]: I1009 07:54:12.134089 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-net\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137479 kubelet[2486]: I1009 07:54:12.134115 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt72j\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-kube-api-access-tt72j\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137479 kubelet[2486]: I1009 07:54:12.134157 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50b26ec4-9a6b-4a13-9a92-d13239f1e606-kube-proxy\") pod \"kube-proxy-gxxfl\" (UID: \"50b26ec4-9a6b-4a13-9a92-d13239f1e606\") " pod="kube-system/kube-proxy-gxxfl" Oct 9 07:54:12.137479 kubelet[2486]: I1009 07:54:12.134186 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50b26ec4-9a6b-4a13-9a92-d13239f1e606-xtables-lock\") pod \"kube-proxy-gxxfl\" (UID: \"50b26ec4-9a6b-4a13-9a92-d13239f1e606\") " pod="kube-system/kube-proxy-gxxfl" Oct 9 07:54:12.137479 kubelet[2486]: I1009 07:54:12.134208 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-config-path\") pod \"cilium-pxj6b\" (UID: 
\"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137479 kubelet[2486]: I1009 07:54:12.134314 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-run\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134405 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50b26ec4-9a6b-4a13-9a92-d13239f1e606-lib-modules\") pod \"kube-proxy-gxxfl\" (UID: \"50b26ec4-9a6b-4a13-9a92-d13239f1e606\") " pod="kube-system/kube-proxy-gxxfl" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134442 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-hubble-tls\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134484 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-kernel\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134515 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-hostproc\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134559 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-xtables-lock\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137701 kubelet[2486]: I1009 07:54:12.134588 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-bpf-maps\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137934 kubelet[2486]: I1009 07:54:12.134623 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-cgroup\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.137934 kubelet[2486]: I1009 07:54:12.134647 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cni-path\") pod \"cilium-pxj6b\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " pod="kube-system/cilium-pxj6b" Oct 9 07:54:12.189202 systemd[1]: Created slice kubepods-besteffort-podf2b166eb_fdeb_4063_8549_a32548acf95a.slice - libcontainer container 
kubepods-besteffort-podf2b166eb_fdeb_4063_8549_a32548acf95a.slice. Oct 9 07:54:12.237603 kubelet[2486]: I1009 07:54:12.237530 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2b166eb-fdeb-4063-8549-a32548acf95a-cilium-config-path\") pod \"cilium-operator-5d85765b45-j7x8s\" (UID: \"f2b166eb-fdeb-4063-8549-a32548acf95a\") " pod="kube-system/cilium-operator-5d85765b45-j7x8s" Oct 9 07:54:12.239028 kubelet[2486]: I1009 07:54:12.238283 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvql5\" (UniqueName: \"kubernetes.io/projected/f2b166eb-fdeb-4063-8549-a32548acf95a-kube-api-access-nvql5\") pod \"cilium-operator-5d85765b45-j7x8s\" (UID: \"f2b166eb-fdeb-4063-8549-a32548acf95a\") " pod="kube-system/cilium-operator-5d85765b45-j7x8s" Oct 9 07:54:12.408851 kubelet[2486]: E1009 07:54:12.408689 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.409659 containerd[1472]: time="2024-10-09T07:54:12.409603371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gxxfl,Uid:50b26ec4-9a6b-4a13-9a92-d13239f1e606,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:12.434826 kubelet[2486]: E1009 07:54:12.432105 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.435008 containerd[1472]: time="2024-10-09T07:54:12.433509171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxj6b,Uid:82ef0ccc-a252-48be-ba53-b5308961843a,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:12.497927 containerd[1472]: time="2024-10-09T07:54:12.493938568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:12.497927 containerd[1472]: time="2024-10-09T07:54:12.494046019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:12.497927 containerd[1472]: time="2024-10-09T07:54:12.494066447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.497927 containerd[1472]: time="2024-10-09T07:54:12.495020805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.498315 kubelet[2486]: E1009 07:54:12.496060 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.502842 containerd[1472]: time="2024-10-09T07:54:12.502789501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j7x8s,Uid:f2b166eb-fdeb-4063-8549-a32548acf95a,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:12.508303 containerd[1472]: time="2024-10-09T07:54:12.508052255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:12.509745 containerd[1472]: time="2024-10-09T07:54:12.508623704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:12.509745 containerd[1472]: time="2024-10-09T07:54:12.508671254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.509745 containerd[1472]: time="2024-10-09T07:54:12.508830199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.544536 systemd[1]: Started cri-containerd-572f71d0e06a910278a748e75d5f9544590f132ae5aaf2a22f952728e313b2d6.scope - libcontainer container 572f71d0e06a910278a748e75d5f9544590f132ae5aaf2a22f952728e313b2d6. Oct 9 07:54:12.560841 systemd[1]: Started cri-containerd-348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1.scope - libcontainer container 348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1. Oct 9 07:54:12.603456 containerd[1472]: time="2024-10-09T07:54:12.602519719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:12.605507 containerd[1472]: time="2024-10-09T07:54:12.603510497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:12.605507 containerd[1472]: time="2024-10-09T07:54:12.603694114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.605966 containerd[1472]: time="2024-10-09T07:54:12.605396034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:12.624279 containerd[1472]: time="2024-10-09T07:54:12.624078204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gxxfl,Uid:50b26ec4-9a6b-4a13-9a92-d13239f1e606,Namespace:kube-system,Attempt:0,} returns sandbox id \"572f71d0e06a910278a748e75d5f9544590f132ae5aaf2a22f952728e313b2d6\"" Oct 9 07:54:12.625697 kubelet[2486]: E1009 07:54:12.625639 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.634207 containerd[1472]: time="2024-10-09T07:54:12.633947029Z" level=info msg="CreateContainer within sandbox \"572f71d0e06a910278a748e75d5f9544590f132ae5aaf2a22f952728e313b2d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:54:12.644503 containerd[1472]: time="2024-10-09T07:54:12.644269961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxj6b,Uid:82ef0ccc-a252-48be-ba53-b5308961843a,Namespace:kube-system,Attempt:0,} returns sandbox id \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\"" Oct 9 07:54:12.646584 kubelet[2486]: E1009 07:54:12.646296 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.650999 containerd[1472]: time="2024-10-09T07:54:12.650193799Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 07:54:12.662524 systemd[1]: Started cri-containerd-33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0.scope - libcontainer container 33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0. 
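[Editor's note] The PullImage request above names the Cilium image with both a tag and a digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...). With such a reference the digest is what actually gets resolved and the tag is informational, which is consistent with the later ImageCreate events showing an empty repo tag and only the repo digest. A small illustrative split of that reference string — plain string handling, no container-runtime API assumed:

```python
ref = ("quay.io/cilium/cilium:v1.12.5@sha256:"
       "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

name_and_tag, _, digest = ref.partition("@")  # the digest portion is authoritative when present
name, _, tag = name_and_tag.rpartition(":")   # rpartition keeps a registry host:port intact

print(name)    # quay.io/cilium/cilium
print(tag)     # v1.12.5
print(digest)  # sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
```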
Oct 9 07:54:12.695715 containerd[1472]: time="2024-10-09T07:54:12.695616605Z" level=info msg="CreateContainer within sandbox \"572f71d0e06a910278a748e75d5f9544590f132ae5aaf2a22f952728e313b2d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"005773a5e84f589246fe3ee737678c46b68f1b483cba9589873336a199de4f2e\"" Oct 9 07:54:12.697486 containerd[1472]: time="2024-10-09T07:54:12.697443096Z" level=info msg="StartContainer for \"005773a5e84f589246fe3ee737678c46b68f1b483cba9589873336a199de4f2e\"" Oct 9 07:54:12.765359 containerd[1472]: time="2024-10-09T07:54:12.765281892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j7x8s,Uid:f2b166eb-fdeb-4063-8549-a32548acf95a,Namespace:kube-system,Attempt:0,} returns sandbox id \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\"" Oct 9 07:54:12.768950 kubelet[2486]: E1009 07:54:12.767150 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:12.769564 systemd[1]: Started cri-containerd-005773a5e84f589246fe3ee737678c46b68f1b483cba9589873336a199de4f2e.scope - libcontainer container 005773a5e84f589246fe3ee737678c46b68f1b483cba9589873336a199de4f2e. Oct 9 07:54:12.836854 containerd[1472]: time="2024-10-09T07:54:12.836790533Z" level=info msg="StartContainer for \"005773a5e84f589246fe3ee737678c46b68f1b483cba9589873336a199de4f2e\" returns successfully" Oct 9 07:54:13.483453 kubelet[2486]: E1009 07:54:13.482698 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:13.508678 kubelet[2486]: I1009 07:54:13.508531 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gxxfl" podStartSLOduration=1.508510722 podStartE2EDuration="1.508510722s" podCreationTimestamp="2024-10-09 07:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:13.507860191 +0000 UTC m=+8.238111754" watchObservedRunningTime="2024-10-09 07:54:13.508510722 +0000 UTC m=+8.238762266" Oct 9 07:54:16.321054 kubelet[2486]: E1009 07:54:16.321014 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:18.083338 kubelet[2486]: E1009 07:54:18.083263 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:18.476728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067713908.mount: Deactivated successfully. Oct 9 07:54:20.848295 update_engine[1449]: I20241009 07:54:20.848193 1449 update_attempter.cc:509] Updating boot flags... 
Oct 9 07:54:20.880221 containerd[1472]: time="2024-10-09T07:54:20.870576349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:20.908464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) Oct 9 07:54:20.908636 containerd[1472]: time="2024-10-09T07:54:20.873350465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299" Oct 9 07:54:20.910489 containerd[1472]: time="2024-10-09T07:54:20.910168175Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:20.913089 containerd[1472]: time="2024-10-09T07:54:20.912867283Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.262516547s" Oct 9 07:54:20.913089 containerd[1472]: time="2024-10-09T07:54:20.913016344Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 9 07:54:20.956655 containerd[1472]: time="2024-10-09T07:54:20.956249916Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 07:54:20.975928 containerd[1472]: time="2024-10-09T07:54:20.975796845Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 07:54:21.052064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) Oct 9 07:54:21.160094 containerd[1472]: time="2024-10-09T07:54:21.159965921Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\"" Oct 9 07:54:21.168908 containerd[1472]: time="2024-10-09T07:54:21.165339779Z" level=info msg="StartContainer for \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\"" Oct 9 07:54:21.169137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) Oct 9 07:54:21.285705 systemd[1]: Started cri-containerd-9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c.scope - libcontainer container 9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c. Oct 9 07:54:21.346048 containerd[1472]: time="2024-10-09T07:54:21.346000334Z" level=info msg="StartContainer for \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\" returns successfully" Oct 9 07:54:21.353547 systemd[1]: cri-containerd-9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c.scope: Deactivated successfully. 
Oct 9 07:54:21.525845 kubelet[2486]: E1009 07:54:21.523109 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:21.529420 containerd[1472]: time="2024-10-09T07:54:21.495393523Z" level=info msg="shim disconnected" id=9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c namespace=k8s.io Oct 9 07:54:21.529653 containerd[1472]: time="2024-10-09T07:54:21.529626178Z" level=warning msg="cleaning up after shim disconnected" id=9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c namespace=k8s.io Oct 9 07:54:21.529705 containerd[1472]: time="2024-10-09T07:54:21.529695101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:54:22.089892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c-rootfs.mount: Deactivated successfully. Oct 9 07:54:22.190800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143371881.mount: Deactivated successfully. Oct 9 07:54:22.527140 kubelet[2486]: E1009 07:54:22.526694 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:22.531966 containerd[1472]: time="2024-10-09T07:54:22.531629281Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 07:54:22.558488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580897685.mount: Deactivated successfully. Oct 9 07:54:22.566143 containerd[1472]: time="2024-10-09T07:54:22.565816653Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\"" Oct 9 07:54:22.567595 containerd[1472]: time="2024-10-09T07:54:22.567262385Z" level=info msg="StartContainer for \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\"" Oct 9 07:54:22.621396 systemd[1]: Started cri-containerd-46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea.scope - libcontainer container 46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea. Oct 9 07:54:22.683193 containerd[1472]: time="2024-10-09T07:54:22.682922298Z" level=info msg="StartContainer for \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\" returns successfully" Oct 9 07:54:22.705326 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 07:54:22.705657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 07:54:22.705762 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:54:22.717352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 07:54:22.717672 systemd[1]: cri-containerd-46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea.scope: Deactivated successfully. Oct 9 07:54:22.763928 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 07:54:22.814664 containerd[1472]: time="2024-10-09T07:54:22.814592709Z" level=info msg="shim disconnected" id=46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea namespace=k8s.io Oct 9 07:54:22.814664 containerd[1472]: time="2024-10-09T07:54:22.814655843Z" level=warning msg="cleaning up after shim disconnected" id=46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea namespace=k8s.io Oct 9 07:54:22.814664 containerd[1472]: time="2024-10-09T07:54:22.814664927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:54:23.102224 containerd[1472]: time="2024-10-09T07:54:23.100748166Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Oct 9 07:54:23.102224 containerd[1472]: time="2024-10-09T07:54:23.102010841Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.145707491s" Oct 9 07:54:23.102224 containerd[1472]: time="2024-10-09T07:54:23.102052969Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 9 07:54:23.111143 containerd[1472]: time="2024-10-09T07:54:23.110841159Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:23.112002 containerd[1472]: time="2024-10-09T07:54:23.111957736Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:23.113438 containerd[1472]: time="2024-10-09T07:54:23.113402763Z" level=info msg="CreateContainer within sandbox \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 07:54:23.133557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318475110.mount: Deactivated successfully. Oct 9 07:54:23.137448 containerd[1472]: time="2024-10-09T07:54:23.137389480Z" level=info msg="CreateContainer within sandbox \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\"" Oct 9 07:54:23.139992 containerd[1472]: time="2024-10-09T07:54:23.139925386Z" level=info msg="StartContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\"" Oct 9 07:54:23.190564 systemd[1]: Started cri-containerd-1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6.scope - libcontainer container 1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6. 
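[Editor's note] The two pulls recorded above give a rough sense of registry throughput from this droplet: the cilium image (166719855 bytes) arrived in 8.262516547s and the operator-generic image (18897442 bytes) in 2.145707491s. A back-of-the-envelope calculation from those logged figures:

```python
# Sizes and durations copied from the containerd "Pulled image ... in Ns" entries above.
pulls = {
    "quay.io/cilium/cilium:v1.12.5": (166_719_855, 8.262516547),
    "quay.io/cilium/operator-generic:v1.12.5": (18_897_442, 2.145707491),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
# roughly 20.2 MB/s for the agent image and 8.8 MB/s for the operator image
```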
Oct 9 07:54:23.228323 containerd[1472]: time="2024-10-09T07:54:23.228242779Z" level=info msg="StartContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" returns successfully" Oct 9 07:54:23.530978 kubelet[2486]: E1009 07:54:23.530849 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:23.537324 containerd[1472]: time="2024-10-09T07:54:23.536539717Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 07:54:23.538514 kubelet[2486]: E1009 07:54:23.538407 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:23.567182 containerd[1472]: time="2024-10-09T07:54:23.566456903Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\"" Oct 9 07:54:23.571211 containerd[1472]: time="2024-10-09T07:54:23.568038102Z" level=info msg="StartContainer for \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\"" Oct 9 07:54:23.634359 systemd[1]: Started cri-containerd-8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7.scope - libcontainer container 8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7. Oct 9 07:54:23.699143 containerd[1472]: time="2024-10-09T07:54:23.698014343Z" level=info msg="StartContainer for \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\" returns successfully" Oct 9 07:54:23.708928 systemd[1]: cri-containerd-8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7.scope: Deactivated successfully. Oct 9 07:54:23.758103 containerd[1472]: time="2024-10-09T07:54:23.757839311Z" level=info msg="shim disconnected" id=8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7 namespace=k8s.io Oct 9 07:54:23.758103 containerd[1472]: time="2024-10-09T07:54:23.758050863Z" level=warning msg="cleaning up after shim disconnected" id=8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7 namespace=k8s.io Oct 9 07:54:23.758103 containerd[1472]: time="2024-10-09T07:54:23.758061144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:54:24.544979 kubelet[2486]: E1009 07:54:24.544327 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:24.544979 kubelet[2486]: E1009 07:54:24.544340 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:24.546921 containerd[1472]: time="2024-10-09T07:54:24.546686994Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 07:54:24.571374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550760047.mount: Deactivated successfully. 
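[Editor's note] By this point the journal has recorded CreateContainer calls for mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state inside the cilium-pxj6b sandbox, which matches Cilium's usual init-container order before the cilium-agent container itself. One way to follow that sequence is to scrape the containerd messages with a regex over the literal format used above; an illustrative sketch (sandbox ids abbreviated in the sample):

```python
import re

# Matches containerd's "CreateContainer within sandbox ... &ContainerMetadata{Name:...}" messages,
# in both the request form ("for container &ContainerMetadata") and the "returns container id" form.
PATTERN = re.compile(
    r"CreateContainer within sandbox .+? for (?:container )?&ContainerMetadata\{Name:([^,]+),"
)

def container_creations(journal_text: str) -> list[str]:
    """Return container names in the order their CreateContainer messages appear."""
    return PATTERN.findall(journal_text)

sample = r'''containerd[1472]: msg="CreateContainer within sandbox \"348a01...\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
containerd[1472]: msg="CreateContainer within sandbox \"348a01...\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"'''

print(container_creations(sample))  # ['mount-cgroup', 'apply-sysctl-overwrites']
```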
Oct 9 07:54:24.577217 containerd[1472]: time="2024-10-09T07:54:24.577018315Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\"" Oct 9 07:54:24.577838 kubelet[2486]: I1009 07:54:24.577574 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j7x8s" podStartSLOduration=2.244800329 podStartE2EDuration="12.577068217s" podCreationTimestamp="2024-10-09 07:54:12 +0000 UTC" firstStartedPulling="2024-10-09 07:54:12.774052301 +0000 UTC m=+7.504303825" lastFinishedPulling="2024-10-09 07:54:23.106320189 +0000 UTC m=+17.836571713" observedRunningTime="2024-10-09 07:54:23.678166559 +0000 UTC m=+18.408418095" watchObservedRunningTime="2024-10-09 07:54:24.577068217 +0000 UTC m=+19.307319760" Oct 9 07:54:24.578970 containerd[1472]: time="2024-10-09T07:54:24.578764880Z" level=info msg="StartContainer for \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\"" Oct 9 07:54:24.622516 systemd[1]: Started cri-containerd-bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a.scope - libcontainer container bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a. Oct 9 07:54:24.654998 systemd[1]: cri-containerd-bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a.scope: Deactivated successfully. Oct 9 07:54:24.658078 containerd[1472]: time="2024-10-09T07:54:24.658034720Z" level=info msg="StartContainer for \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\" returns successfully" Oct 9 07:54:24.693974 containerd[1472]: time="2024-10-09T07:54:24.693865905Z" level=info msg="shim disconnected" id=bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a namespace=k8s.io Oct 9 07:54:24.693974 containerd[1472]: time="2024-10-09T07:54:24.693974788Z" level=warning msg="cleaning up after shim disconnected" id=bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a namespace=k8s.io Oct 9 07:54:24.693974 containerd[1472]: time="2024-10-09T07:54:24.693985801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:54:25.091102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a-rootfs.mount: Deactivated successfully. 
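[Editor's note] The cilium-operator startup record above reports podStartE2EDuration=12.577068217s but podStartSLOduration=2.244800329s; the gap is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling, consistent with the pod-startup SLI excluding image pull time. A quick check with the logged timestamps (nanoseconds truncated to the microseconds strptime accepts):

```python
from datetime import datetime

def parse_utc(ts: str) -> datetime:
    """Parse '2024-10-09 07:54:12.774052301 +0000 UTC', truncating ns to µs."""
    date, time, *_ = ts.split()
    if "." in time:
        sec, frac = time.split(".")
        time = f"{sec}.{frac[:6]}"
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(f"{date} {time}", fmt)

first_pull = parse_utc("2024-10-09 07:54:12.774052301 +0000 UTC")
last_pull  = parse_utc("2024-10-09 07:54:23.106320189 +0000 UTC")

pod_start_e2e = 12.577068217                      # podStartE2EDuration from the entry above
pull_time = (last_pull - first_pull).total_seconds()
print(pull_time)                                  # ~10.332268 s spent pulling the operator image
print(pod_start_e2e - pull_time)                  # ~2.2448 s, i.e. the logged podStartSLOduration
```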
Oct 9 07:54:25.549881 kubelet[2486]: E1009 07:54:25.549838 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:25.554429 containerd[1472]: time="2024-10-09T07:54:25.554387513Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 07:54:25.579593 containerd[1472]: time="2024-10-09T07:54:25.579539133Z" level=info msg="CreateContainer within sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\"" Oct 9 07:54:25.581090 containerd[1472]: time="2024-10-09T07:54:25.581055240Z" level=info msg="StartContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\"" Oct 9 07:54:25.621460 systemd[1]: Started cri-containerd-ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419.scope - libcontainer container ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419. Oct 9 07:54:25.666554 containerd[1472]: time="2024-10-09T07:54:25.666249050Z" level=info msg="StartContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" returns successfully" Oct 9 07:54:25.884183 kubelet[2486]: I1009 07:54:25.882735 2486 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 07:54:25.959914 systemd[1]: Created slice kubepods-burstable-pod53249d9e_e946_4736_a068_46ab39a322b6.slice - libcontainer container kubepods-burstable-pod53249d9e_e946_4736_a068_46ab39a322b6.slice. Oct 9 07:54:25.971731 systemd[1]: Created slice kubepods-burstable-pod62e4df40_334e_46a1_ad8e_ebbb5176b01e.slice - libcontainer container kubepods-burstable-pod62e4df40_334e_46a1_ad8e_ebbb5176b01e.slice. 
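[Editor's note] The slice names systemd reports here encode each pod's QoS class and UID: kubelet's systemd cgroup driver prefixes kubepods-&lt;qos&gt;-pod and replaces the dashes in the UID with underscores, since "-" is systemd's slice hierarchy separator. A small reconstruction, using UIDs that appear in the volume-reconciler entries elsewhere in this journal:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Rebuild the slice name kubelet's systemd cgroup driver uses for a pod
    (dashes in the UID become underscores so they are not read as slice separators)."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "62e4df40-334e-46a1-ad8e-ebbb5176b01e"))
# kubepods-burstable-pod62e4df40_334e_46a1_ad8e_ebbb5176b01e.slice  (coredns pod, as logged above)
print(pod_slice_name("besteffort", "50b26ec4-9a6b-4a13-9a92-d13239f1e606"))
# kubepods-besteffort-pod50b26ec4_9a6b_4a13_9a92_d13239f1e606.slice (kube-proxy pod, as logged earlier)
```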
Oct 9 07:54:26.046384 kubelet[2486]: I1009 07:54:26.045979 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53249d9e-e946-4736-a068-46ab39a322b6-config-volume\") pod \"coredns-6f6b679f8f-z8qzr\" (UID: \"53249d9e-e946-4736-a068-46ab39a322b6\") " pod="kube-system/coredns-6f6b679f8f-z8qzr" Oct 9 07:54:26.046384 kubelet[2486]: I1009 07:54:26.046064 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2cbx\" (UniqueName: \"kubernetes.io/projected/53249d9e-e946-4736-a068-46ab39a322b6-kube-api-access-g2cbx\") pod \"coredns-6f6b679f8f-z8qzr\" (UID: \"53249d9e-e946-4736-a068-46ab39a322b6\") " pod="kube-system/coredns-6f6b679f8f-z8qzr" Oct 9 07:54:26.150177 kubelet[2486]: I1009 07:54:26.146849 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtzx\" (UniqueName: \"kubernetes.io/projected/62e4df40-334e-46a1-ad8e-ebbb5176b01e-kube-api-access-xjtzx\") pod \"coredns-6f6b679f8f-4dnnr\" (UID: \"62e4df40-334e-46a1-ad8e-ebbb5176b01e\") " pod="kube-system/coredns-6f6b679f8f-4dnnr" Oct 9 07:54:26.150177 kubelet[2486]: I1009 07:54:26.147085 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62e4df40-334e-46a1-ad8e-ebbb5176b01e-config-volume\") pod \"coredns-6f6b679f8f-4dnnr\" (UID: \"62e4df40-334e-46a1-ad8e-ebbb5176b01e\") " pod="kube-system/coredns-6f6b679f8f-4dnnr" Oct 9 07:54:26.267470 kubelet[2486]: E1009 07:54:26.267414 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:26.272940 containerd[1472]: time="2024-10-09T07:54:26.272873538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8qzr,Uid:53249d9e-e946-4736-a068-46ab39a322b6,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:26.277313 kubelet[2486]: E1009 07:54:26.277068 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:26.285104 containerd[1472]: time="2024-10-09T07:54:26.285050293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4dnnr,Uid:62e4df40-334e-46a1-ad8e-ebbb5176b01e,Namespace:kube-system,Attempt:0,}" Oct 9 07:54:26.557085 kubelet[2486]: E1009 07:54:26.557033 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:27.559900 kubelet[2486]: E1009 07:54:27.559732 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:28.058666 systemd-networkd[1371]: cilium_host: Link UP Oct 9 07:54:28.061217 systemd-networkd[1371]: cilium_net: Link UP Oct 9 07:54:28.062579 systemd-networkd[1371]: cilium_net: Gained carrier Oct 9 07:54:28.063792 systemd-networkd[1371]: cilium_host: Gained carrier Oct 9 07:54:28.210704 systemd-networkd[1371]: cilium_vxlan: Link UP Oct 9 07:54:28.210714 systemd-networkd[1371]: cilium_vxlan: Gained carrier Oct 9 07:54:28.510389 systemd-networkd[1371]: 
cilium_host: Gained IPv6LL Oct 9 07:54:28.561746 kubelet[2486]: E1009 07:54:28.561705 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:28.643356 kernel: NET: Registered PF_ALG protocol family Oct 9 07:54:28.878485 systemd-networkd[1371]: cilium_net: Gained IPv6LL Oct 9 07:54:29.468513 systemd-networkd[1371]: lxc_health: Link UP Oct 9 07:54:29.479468 systemd-networkd[1371]: lxc_health: Gained carrier Oct 9 07:54:29.710330 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL Oct 9 07:54:29.878085 systemd-networkd[1371]: lxcd26788c32066: Link UP Oct 9 07:54:29.883197 kernel: eth0: renamed from tmp2e1a4 Oct 9 07:54:29.889636 systemd-networkd[1371]: lxcd26788c32066: Gained carrier Oct 9 07:54:29.924455 systemd-networkd[1371]: lxcd2ecce69ae77: Link UP Oct 9 07:54:29.931260 kernel: eth0: renamed from tmp12d21 Oct 9 07:54:29.936397 systemd-networkd[1371]: lxcd2ecce69ae77: Gained carrier Oct 9 07:54:30.434439 kubelet[2486]: E1009 07:54:30.433969 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:30.461367 kubelet[2486]: I1009 07:54:30.460757 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pxj6b" podStartSLOduration=10.153075624 podStartE2EDuration="18.46073805s" podCreationTimestamp="2024-10-09 07:54:12 +0000 UTC" firstStartedPulling="2024-10-09 07:54:12.647673316 +0000 UTC m=+7.377924835" lastFinishedPulling="2024-10-09 07:54:20.955335724 +0000 UTC m=+15.685587261" observedRunningTime="2024-10-09 07:54:26.580314566 +0000 UTC m=+21.310566106" watchObservedRunningTime="2024-10-09 07:54:30.46073805 +0000 UTC m=+25.190989589" Oct 9 07:54:30.926520 systemd-networkd[1371]: lxcd26788c32066: Gained IPv6LL Oct 9 07:54:31.121808 systemd-networkd[1371]: lxcd2ecce69ae77: Gained IPv6LL Oct 9 07:54:31.502339 systemd-networkd[1371]: lxc_health: Gained IPv6LL Oct 9 07:54:34.320157 containerd[1472]: time="2024-10-09T07:54:34.314192687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:34.320157 containerd[1472]: time="2024-10-09T07:54:34.314277053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:34.320157 containerd[1472]: time="2024-10-09T07:54:34.314293285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:34.320157 containerd[1472]: time="2024-10-09T07:54:34.314513819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:34.338132 containerd[1472]: time="2024-10-09T07:54:34.334272747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:34.338132 containerd[1472]: time="2024-10-09T07:54:34.334539833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:34.338132 containerd[1472]: time="2024-10-09T07:54:34.334555008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:34.338132 containerd[1472]: time="2024-10-09T07:54:34.334894911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:34.382459 systemd[1]: Started cri-containerd-12d21d727a221832ed2b47cd001146b42027da98304b221fe5a13bb7558ef4a9.scope - libcontainer container 12d21d727a221832ed2b47cd001146b42027da98304b221fe5a13bb7558ef4a9. Oct 9 07:54:34.402432 systemd[1]: Started cri-containerd-2e1a4dce67b106e59fa8c6fd1fec74b603573586f994a0ad1aaa7d339e995a66.scope - libcontainer container 2e1a4dce67b106e59fa8c6fd1fec74b603573586f994a0ad1aaa7d339e995a66. Oct 9 07:54:34.476866 containerd[1472]: time="2024-10-09T07:54:34.476761730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8qzr,Uid:53249d9e-e946-4736-a068-46ab39a322b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"12d21d727a221832ed2b47cd001146b42027da98304b221fe5a13bb7558ef4a9\"" Oct 9 07:54:34.479379 kubelet[2486]: E1009 07:54:34.479322 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:34.490729 containerd[1472]: time="2024-10-09T07:54:34.488585099Z" level=info msg="CreateContainer within sandbox \"12d21d727a221832ed2b47cd001146b42027da98304b221fe5a13bb7558ef4a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:54:34.506895 containerd[1472]: time="2024-10-09T07:54:34.506827078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4dnnr,Uid:62e4df40-334e-46a1-ad8e-ebbb5176b01e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e1a4dce67b106e59fa8c6fd1fec74b603573586f994a0ad1aaa7d339e995a66\"" Oct 9 07:54:34.508619 kubelet[2486]: E1009 07:54:34.508586 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:34.514032 containerd[1472]: time="2024-10-09T07:54:34.513993586Z" level=info msg="CreateContainer within sandbox \"2e1a4dce67b106e59fa8c6fd1fec74b603573586f994a0ad1aaa7d339e995a66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:54:34.554490 containerd[1472]: time="2024-10-09T07:54:34.554421338Z" level=info msg="CreateContainer within sandbox \"12d21d727a221832ed2b47cd001146b42027da98304b221fe5a13bb7558ef4a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"156c2184f3766837f77c72dc239c852d02f84ddbde93bdba6837b19dd5b57699\"" Oct 9 07:54:34.556446 containerd[1472]: time="2024-10-09T07:54:34.556330048Z" level=info msg="StartContainer for \"156c2184f3766837f77c72dc239c852d02f84ddbde93bdba6837b19dd5b57699\"" Oct 9 07:54:34.560877 containerd[1472]: time="2024-10-09T07:54:34.560827037Z" level=info msg="CreateContainer within sandbox \"2e1a4dce67b106e59fa8c6fd1fec74b603573586f994a0ad1aaa7d339e995a66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee8a993ce41ffcc6974079b6a57d760086ba78e03054e224d366fd085b1e08a\"" Oct 9 07:54:34.563374 containerd[1472]: time="2024-10-09T07:54:34.563334245Z" level=info msg="StartContainer for \"bee8a993ce41ffcc6974079b6a57d760086ba78e03054e224d366fd085b1e08a\"" Oct 9 07:54:34.648362 systemd[1]: Started cri-containerd-156c2184f3766837f77c72dc239c852d02f84ddbde93bdba6837b19dd5b57699.scope - libcontainer container 
156c2184f3766837f77c72dc239c852d02f84ddbde93bdba6837b19dd5b57699. Oct 9 07:54:34.656377 systemd[1]: Started cri-containerd-bee8a993ce41ffcc6974079b6a57d760086ba78e03054e224d366fd085b1e08a.scope - libcontainer container bee8a993ce41ffcc6974079b6a57d760086ba78e03054e224d366fd085b1e08a. Oct 9 07:54:34.718433 containerd[1472]: time="2024-10-09T07:54:34.717994526Z" level=info msg="StartContainer for \"156c2184f3766837f77c72dc239c852d02f84ddbde93bdba6837b19dd5b57699\" returns successfully" Oct 9 07:54:34.718433 containerd[1472]: time="2024-10-09T07:54:34.718051885Z" level=info msg="StartContainer for \"bee8a993ce41ffcc6974079b6a57d760086ba78e03054e224d366fd085b1e08a\" returns successfully" Oct 9 07:54:35.616555 kubelet[2486]: E1009 07:54:35.615153 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:35.620258 kubelet[2486]: E1009 07:54:35.619921 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:35.649948 kubelet[2486]: I1009 07:54:35.649805 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z8qzr" podStartSLOduration=23.649742256 podStartE2EDuration="23.649742256s" podCreationTimestamp="2024-10-09 07:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:35.646461658 +0000 UTC m=+30.376713197" watchObservedRunningTime="2024-10-09 07:54:35.649742256 +0000 UTC m=+30.379993799" Oct 9 07:54:35.737202 kubelet[2486]: I1009 07:54:35.736054 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4dnnr" podStartSLOduration=23.736033398 podStartE2EDuration="23.736033398s" podCreationTimestamp="2024-10-09 07:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:54:35.735066683 +0000 UTC m=+30.465318225" watchObservedRunningTime="2024-10-09 07:54:35.736033398 +0000 UTC m=+30.466284940" Oct 9 07:54:36.621872 kubelet[2486]: E1009 07:54:36.621673 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:36.621872 kubelet[2486]: E1009 07:54:36.621787 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:37.624948 kubelet[2486]: E1009 07:54:37.624205 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:37.624948 kubelet[2486]: E1009 07:54:37.624584 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:41.202709 kubelet[2486]: I1009 07:54:41.201818 2486 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:54:41.202709 kubelet[2486]: E1009 07:54:41.202390 2486 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:41.633377 kubelet[2486]: E1009 07:54:41.633341 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:43.012612 systemd[1]: Started sshd@7-64.23.254.253:22-139.178.89.65:52014.service - OpenSSH per-connection server daemon (139.178.89.65:52014). Oct 9 07:54:43.085156 sshd[3884]: Accepted publickey for core from 139.178.89.65 port 52014 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:43.086872 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:43.093826 systemd-logind[1446]: New session 8 of user core. Oct 9 07:54:43.100416 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:54:43.692454 sshd[3884]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:43.697259 systemd[1]: sshd@7-64.23.254.253:22-139.178.89.65:52014.service: Deactivated successfully. Oct 9 07:54:43.700972 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:54:43.702293 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:54:43.703516 systemd-logind[1446]: Removed session 8. Oct 9 07:54:48.712540 systemd[1]: Started sshd@8-64.23.254.253:22-139.178.89.65:49264.service - OpenSSH per-connection server daemon (139.178.89.65:49264). Oct 9 07:54:48.755745 sshd[3900]: Accepted publickey for core from 139.178.89.65 port 49264 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:48.757738 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:48.763352 systemd-logind[1446]: New session 9 of user core. Oct 9 07:54:48.770385 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:54:48.909926 sshd[3900]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:48.915015 systemd[1]: sshd@8-64.23.254.253:22-139.178.89.65:49264.service: Deactivated successfully. Oct 9 07:54:48.917107 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:54:48.918008 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:54:48.919235 systemd-logind[1446]: Removed session 9. Oct 9 07:54:53.933359 systemd[1]: Started sshd@9-64.23.254.253:22-139.178.89.65:49270.service - OpenSSH per-connection server daemon (139.178.89.65:49270). Oct 9 07:54:53.972890 sshd[3913]: Accepted publickey for core from 139.178.89.65 port 49270 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:53.974593 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:53.980264 systemd-logind[1446]: New session 10 of user core. Oct 9 07:54:53.987626 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 07:54:54.120919 sshd[3913]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:54.125998 systemd[1]: sshd@9-64.23.254.253:22-139.178.89.65:49270.service: Deactivated successfully. Oct 9 07:54:54.129082 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:54:54.130698 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:54:54.131827 systemd-logind[1446]: Removed session 10. 
Oct 9 07:54:59.134686 systemd[1]: Started sshd@10-64.23.254.253:22-139.178.89.65:40508.service - OpenSSH per-connection server daemon (139.178.89.65:40508). Oct 9 07:54:59.190325 sshd[3927]: Accepted publickey for core from 139.178.89.65 port 40508 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:59.192047 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:59.197212 systemd-logind[1446]: New session 11 of user core. Oct 9 07:54:59.203571 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:54:59.333885 sshd[3927]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:59.343300 systemd[1]: sshd@10-64.23.254.253:22-139.178.89.65:40508.service: Deactivated successfully. Oct 9 07:54:59.345494 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:54:59.347204 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:54:59.355424 systemd[1]: Started sshd@11-64.23.254.253:22-139.178.89.65:40524.service - OpenSSH per-connection server daemon (139.178.89.65:40524). Oct 9 07:54:59.357026 systemd-logind[1446]: Removed session 11. Oct 9 07:54:59.393407 sshd[3941]: Accepted publickey for core from 139.178.89.65 port 40524 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:59.394912 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:59.402800 systemd-logind[1446]: New session 12 of user core. Oct 9 07:54:59.405336 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:54:59.586325 sshd[3941]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:59.599302 systemd[1]: sshd@11-64.23.254.253:22-139.178.89.65:40524.service: Deactivated successfully. Oct 9 07:54:59.603243 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:54:59.606955 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:54:59.617580 systemd[1]: Started sshd@12-64.23.254.253:22-139.178.89.65:40536.service - OpenSSH per-connection server daemon (139.178.89.65:40536). Oct 9 07:54:59.623162 systemd-logind[1446]: Removed session 12. Oct 9 07:54:59.662587 sshd[3952]: Accepted publickey for core from 139.178.89.65 port 40536 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:59.664375 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:59.669565 systemd-logind[1446]: New session 13 of user core. Oct 9 07:54:59.676352 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:54:59.808475 sshd[3952]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:59.813845 systemd[1]: sshd@12-64.23.254.253:22-139.178.89.65:40536.service: Deactivated successfully. Oct 9 07:54:59.817030 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:54:59.818259 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:54:59.819540 systemd-logind[1446]: Removed session 13. Oct 9 07:55:04.827761 systemd[1]: Started sshd@13-64.23.254.253:22-139.178.89.65:40546.service - OpenSSH per-connection server daemon (139.178.89.65:40546). 
Oct 9 07:55:04.882847 sshd[3964]: Accepted publickey for core from 139.178.89.65 port 40546 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:04.885089 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:04.893706 systemd-logind[1446]: New session 14 of user core. Oct 9 07:55:04.899712 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:55:05.062941 sshd[3964]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:05.068785 systemd[1]: sshd@13-64.23.254.253:22-139.178.89.65:40546.service: Deactivated successfully. Oct 9 07:55:05.071031 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:55:05.072017 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:55:05.073037 systemd-logind[1446]: Removed session 14. Oct 9 07:55:10.081553 systemd[1]: Started sshd@14-64.23.254.253:22-139.178.89.65:48690.service - OpenSSH per-connection server daemon (139.178.89.65:48690). Oct 9 07:55:10.136247 sshd[3979]: Accepted publickey for core from 139.178.89.65 port 48690 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:10.139029 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:10.146964 systemd-logind[1446]: New session 15 of user core. Oct 9 07:55:10.159826 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:55:10.310549 sshd[3979]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:10.316632 systemd[1]: sshd@14-64.23.254.253:22-139.178.89.65:48690.service: Deactivated successfully. Oct 9 07:55:10.319914 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:55:10.320758 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:55:10.321871 systemd-logind[1446]: Removed session 15. Oct 9 07:55:15.327484 systemd[1]: Started sshd@15-64.23.254.253:22-139.178.89.65:56016.service - OpenSSH per-connection server daemon (139.178.89.65:56016). Oct 9 07:55:15.388409 sshd[3994]: Accepted publickey for core from 139.178.89.65 port 56016 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:15.390284 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:15.395439 systemd-logind[1446]: New session 16 of user core. Oct 9 07:55:15.402428 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:55:15.561478 sshd[3994]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:15.573388 systemd[1]: sshd@15-64.23.254.253:22-139.178.89.65:56016.service: Deactivated successfully. Oct 9 07:55:15.576085 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:55:15.579357 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:55:15.594577 systemd[1]: Started sshd@16-64.23.254.253:22-139.178.89.65:56032.service - OpenSSH per-connection server daemon (139.178.89.65:56032). Oct 9 07:55:15.596554 systemd-logind[1446]: Removed session 16. Oct 9 07:55:15.634984 sshd[4007]: Accepted publickey for core from 139.178.89.65 port 56032 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:15.637022 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:15.643347 systemd-logind[1446]: New session 17 of user core. Oct 9 07:55:15.649391 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 9 07:55:15.948429 sshd[4007]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:15.968578 systemd[1]: Started sshd@17-64.23.254.253:22-139.178.89.65:56044.service - OpenSSH per-connection server daemon (139.178.89.65:56044). Oct 9 07:55:15.969630 systemd[1]: sshd@16-64.23.254.253:22-139.178.89.65:56032.service: Deactivated successfully. Oct 9 07:55:15.971823 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:55:15.976349 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:55:15.977443 systemd-logind[1446]: Removed session 17. Oct 9 07:55:16.036990 sshd[4016]: Accepted publickey for core from 139.178.89.65 port 56044 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:16.038812 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:16.045207 systemd-logind[1446]: New session 18 of user core. Oct 9 07:55:16.052378 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:55:17.680801 sshd[4016]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:17.698064 systemd[1]: sshd@17-64.23.254.253:22-139.178.89.65:56044.service: Deactivated successfully. Oct 9 07:55:17.704955 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:55:17.706333 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:55:17.710579 systemd-logind[1446]: Removed session 18. Oct 9 07:55:17.717662 systemd[1]: Started sshd@18-64.23.254.253:22-139.178.89.65:56046.service - OpenSSH per-connection server daemon (139.178.89.65:56046). Oct 9 07:55:17.782207 sshd[4035]: Accepted publickey for core from 139.178.89.65 port 56046 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:17.784001 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:17.790801 systemd-logind[1446]: New session 19 of user core. Oct 9 07:55:17.797403 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:55:18.217841 sshd[4035]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:18.233519 systemd[1]: sshd@18-64.23.254.253:22-139.178.89.65:56046.service: Deactivated successfully. Oct 9 07:55:18.237968 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:55:18.239221 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:55:18.247715 systemd[1]: Started sshd@19-64.23.254.253:22-139.178.89.65:56062.service - OpenSSH per-connection server daemon (139.178.89.65:56062). Oct 9 07:55:18.248786 systemd-logind[1446]: Removed session 19. Oct 9 07:55:18.293628 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 56062 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:18.295337 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:18.301376 systemd-logind[1446]: New session 20 of user core. Oct 9 07:55:18.309448 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:55:18.399155 kubelet[2486]: E1009 07:55:18.398875 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:18.470898 sshd[4047]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:18.476461 systemd[1]: sshd@19-64.23.254.253:22-139.178.89.65:56062.service: Deactivated successfully. 
Oct 9 07:55:18.480966 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:55:18.483544 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:55:18.485645 systemd-logind[1446]: Removed session 20. Oct 9 07:55:23.492528 systemd[1]: Started sshd@20-64.23.254.253:22-139.178.89.65:56070.service - OpenSSH per-connection server daemon (139.178.89.65:56070). Oct 9 07:55:23.531847 sshd[4060]: Accepted publickey for core from 139.178.89.65 port 56070 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:23.534474 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:23.540406 systemd-logind[1446]: New session 21 of user core. Oct 9 07:55:23.552443 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:55:23.678685 sshd[4060]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:23.683218 systemd[1]: sshd@20-64.23.254.253:22-139.178.89.65:56070.service: Deactivated successfully. Oct 9 07:55:23.685689 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:55:23.686738 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:55:23.688629 systemd-logind[1446]: Removed session 21. Oct 9 07:55:28.705636 systemd[1]: Started sshd@21-64.23.254.253:22-139.178.89.65:51396.service - OpenSSH per-connection server daemon (139.178.89.65:51396). Oct 9 07:55:28.750265 sshd[4076]: Accepted publickey for core from 139.178.89.65 port 51396 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:28.751833 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:28.758642 systemd-logind[1446]: New session 22 of user core. Oct 9 07:55:28.764403 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:55:28.920050 sshd[4076]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:28.925079 systemd[1]: sshd@21-64.23.254.253:22-139.178.89.65:51396.service: Deactivated successfully. Oct 9 07:55:28.927801 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:55:28.928638 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:55:28.930186 systemd-logind[1446]: Removed session 22. Oct 9 07:55:30.398773 kubelet[2486]: E1009 07:55:30.398713 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:33.944547 systemd[1]: Started sshd@22-64.23.254.253:22-139.178.89.65:51412.service - OpenSSH per-connection server daemon (139.178.89.65:51412). Oct 9 07:55:33.982240 sshd[4089]: Accepted publickey for core from 139.178.89.65 port 51412 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:33.986426 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:33.992905 systemd-logind[1446]: New session 23 of user core. Oct 9 07:55:34.004951 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:55:34.129166 sshd[4089]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:34.133376 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Oct 9 07:55:34.134469 systemd[1]: sshd@22-64.23.254.253:22-139.178.89.65:51412.service: Deactivated successfully. Oct 9 07:55:34.137216 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:55:34.138286 systemd-logind[1446]: Removed session 23. 
Oct 9 07:55:34.398358 kubelet[2486]: E1009 07:55:34.398278 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:35.398547 kubelet[2486]: E1009 07:55:35.398169 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:39.149545 systemd[1]: Started sshd@23-64.23.254.253:22-139.178.89.65:53452.service - OpenSSH per-connection server daemon (139.178.89.65:53452). Oct 9 07:55:39.191351 sshd[4103]: Accepted publickey for core from 139.178.89.65 port 53452 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:39.193728 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:39.199570 systemd-logind[1446]: New session 24 of user core. Oct 9 07:55:39.212398 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 07:55:39.344473 sshd[4103]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:39.353937 systemd[1]: sshd@23-64.23.254.253:22-139.178.89.65:53452.service: Deactivated successfully. Oct 9 07:55:39.357656 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 07:55:39.359917 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Oct 9 07:55:39.364524 systemd[1]: Started sshd@24-64.23.254.253:22-139.178.89.65:53456.service - OpenSSH per-connection server daemon (139.178.89.65:53456). Oct 9 07:55:39.365745 systemd-logind[1446]: Removed session 24. Oct 9 07:55:39.408687 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 53456 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:39.409968 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:39.415181 systemd-logind[1446]: New session 25 of user core. Oct 9 07:55:39.425417 systemd[1]: Started session-25.scope - Session 25 of User core. 
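Most of this stretch of the journal is the same SSH cycle repeated: sshd accepts a publickey login for core, pam_unix opens the session, systemd-logind registers session N and session-N.scope starts, then the same steps unwind a few seconds later. The sketch below pairs the logind open/close lines from a capture like this and reports per-session lifetimes; the regexes and the sample lines are assumptions matched to this excerpt, not a logind interface.

```python
# Illustrative sketch: pair "New session N" / "Removed session N" lines from
# a journal dump like the one above and compute how long each session lived.
import re
from datetime import datetime

TS = r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+)"
NEW = re.compile(TS + r".*systemd-logind\[\d+\]: New session (\d+) of user")
GONE = re.compile(TS + r".*systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(s):
    # These journal lines carry no year; durations within one capture are
    # still correct because both endpoints get the same implied year.
    return datetime.strptime(s, "%b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := NEW.search(line):
            opened[m.group(2)] = parse_ts(m.group(1))
        elif (m := GONE.search(line)) and m.group(2) in opened:
            durations[m.group(2)] = parse_ts(m.group(1)) - opened.pop(m.group(2))
    return durations

if __name__ == "__main__":
    sample = [
        "Oct 9 07:54:48.763352 systemd-logind[1446]: New session 9 of user core.",
        "Oct 9 07:54:48.919235 systemd-logind[1446]: Removed session 9.",
    ]
    for sid, dur in session_durations(sample).items():
        print(f"session {sid}: {dur.total_seconds():.3f}s")
```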
Oct 9 07:55:40.398493 kubelet[2486]: E1009 07:55:40.398441 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:40.833912 containerd[1472]: time="2024-10-09T07:55:40.829694526Z" level=info msg="StopContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" with timeout 30 (s)" Oct 9 07:55:40.834556 containerd[1472]: time="2024-10-09T07:55:40.834524451Z" level=info msg="Stop container \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" with signal terminated" Oct 9 07:55:40.888313 containerd[1472]: time="2024-10-09T07:55:40.888176002Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:55:40.905509 containerd[1472]: time="2024-10-09T07:55:40.905346205Z" level=info msg="StopContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" with timeout 2 (s)" Oct 9 07:55:40.906343 containerd[1472]: time="2024-10-09T07:55:40.906307313Z" level=info msg="Stop container \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" with signal terminated" Oct 9 07:55:40.921070 systemd-networkd[1371]: lxc_health: Link DOWN Oct 9 07:55:40.921164 systemd-networkd[1371]: lxc_health: Lost carrier Oct 9 07:55:40.956150 systemd[1]: cri-containerd-1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6.scope: Deactivated successfully. Oct 9 07:55:40.974486 systemd[1]: cri-containerd-ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419.scope: Deactivated successfully. Oct 9 07:55:40.975646 systemd[1]: cri-containerd-ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419.scope: Consumed 8.020s CPU time. Oct 9 07:55:41.000788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6-rootfs.mount: Deactivated successfully. Oct 9 07:55:41.010594 containerd[1472]: time="2024-10-09T07:55:41.010497847Z" level=info msg="shim disconnected" id=1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6 namespace=k8s.io Oct 9 07:55:41.010814 containerd[1472]: time="2024-10-09T07:55:41.010589360Z" level=warning msg="cleaning up after shim disconnected" id=1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6 namespace=k8s.io Oct 9 07:55:41.010814 containerd[1472]: time="2024-10-09T07:55:41.010633793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:41.011476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419-rootfs.mount: Deactivated successfully. 
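The containerd lines above show the graceful-stop path: each StopContainer call carries a timeout (30 s and 2 s here), the runtime delivers the stop signal ("with signal terminated"), and the container's scope is deactivated once the process exits (one of them after roughly 8 s of consumed CPU time). The general shape of stop-with-timeout, SIGTERM first and SIGKILL if the deadline passes, is sketched below with a plain subprocess; this is a generic illustration, not containerd's runtime code.

```python
# Generic sketch of stop-with-timeout semantics: ask nicely with SIGTERM,
# wait up to the timeout, then escalate to SIGKILL.
import signal
import subprocess
import time

def stop_with_timeout(proc: subprocess.Popen, timeout: float = 30.0) -> int:
    proc.send_signal(signal.SIGTERM)      # graceful stop request
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if proc.poll() is not None:       # process exited on its own
            return proc.returncode
        time.sleep(0.1)
    proc.kill()                           # deadline passed: SIGKILL
    return proc.wait()

if __name__ == "__main__":
    child = subprocess.Popen(["sleep", "300"])
    print("exit status:", stop_with_timeout(child, timeout=2.0))
```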
Oct 9 07:55:41.012553 containerd[1472]: time="2024-10-09T07:55:41.011840048Z" level=info msg="shim disconnected" id=ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419 namespace=k8s.io Oct 9 07:55:41.012553 containerd[1472]: time="2024-10-09T07:55:41.011893266Z" level=warning msg="cleaning up after shim disconnected" id=ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419 namespace=k8s.io Oct 9 07:55:41.012553 containerd[1472]: time="2024-10-09T07:55:41.011904924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:41.040233 containerd[1472]: time="2024-10-09T07:55:41.040160800Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:55:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:55:41.045011 containerd[1472]: time="2024-10-09T07:55:41.044956918Z" level=info msg="StopContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" returns successfully" Oct 9 07:55:41.046181 containerd[1472]: time="2024-10-09T07:55:41.045812577Z" level=info msg="StopPodSandbox for \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\"" Oct 9 07:55:41.053167 containerd[1472]: time="2024-10-09T07:55:41.051215582Z" level=info msg="Container to stop \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.055491 containerd[1472]: time="2024-10-09T07:55:41.055444198Z" level=info msg="StopContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" returns successfully" Oct 9 07:55:41.056248 containerd[1472]: time="2024-10-09T07:55:41.056215528Z" level=info msg="StopPodSandbox for \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\"" Oct 9 07:55:41.056362 containerd[1472]: time="2024-10-09T07:55:41.056260394Z" level=info msg="Container to stop \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.056362 containerd[1472]: time="2024-10-09T07:55:41.056281610Z" level=info msg="Container to stop \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.056362 containerd[1472]: time="2024-10-09T07:55:41.056295915Z" level=info msg="Container to stop \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.056362 containerd[1472]: time="2024-10-09T07:55:41.056308948Z" level=info msg="Container to stop \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.056362 containerd[1472]: time="2024-10-09T07:55:41.056321263Z" level=info msg="Container to stop \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 07:55:41.056557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0-shm.mount: Deactivated successfully. Oct 9 07:55:41.061029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1-shm.mount: Deactivated successfully. 
Oct 9 07:55:41.072256 systemd[1]: cri-containerd-348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1.scope: Deactivated successfully. Oct 9 07:55:41.074184 systemd[1]: cri-containerd-33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0.scope: Deactivated successfully. Oct 9 07:55:41.113302 containerd[1472]: time="2024-10-09T07:55:41.113166886Z" level=info msg="shim disconnected" id=348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1 namespace=k8s.io Oct 9 07:55:41.115811 containerd[1472]: time="2024-10-09T07:55:41.115766710Z" level=warning msg="cleaning up after shim disconnected" id=348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1 namespace=k8s.io Oct 9 07:55:41.116302 containerd[1472]: time="2024-10-09T07:55:41.115964502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:41.121240 containerd[1472]: time="2024-10-09T07:55:41.121098137Z" level=info msg="shim disconnected" id=33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0 namespace=k8s.io Oct 9 07:55:41.121240 containerd[1472]: time="2024-10-09T07:55:41.121178984Z" level=warning msg="cleaning up after shim disconnected" id=33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0 namespace=k8s.io Oct 9 07:55:41.121240 containerd[1472]: time="2024-10-09T07:55:41.121189754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:55:41.149265 containerd[1472]: time="2024-10-09T07:55:41.148528843Z" level=info msg="TearDown network for sandbox \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" successfully" Oct 9 07:55:41.149265 containerd[1472]: time="2024-10-09T07:55:41.148590682Z" level=info msg="StopPodSandbox for \"348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1\" returns successfully" Oct 9 07:55:41.150322 containerd[1472]: time="2024-10-09T07:55:41.150040077Z" level=info msg="TearDown network for sandbox \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\" successfully" Oct 9 07:55:41.150322 containerd[1472]: time="2024-10-09T07:55:41.150088377Z" level=info msg="StopPodSandbox for \"33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0\" returns successfully" Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171214 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-xtables-lock\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171277 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2b166eb-fdeb-4063-8549-a32548acf95a-cilium-config-path\") pod \"f2b166eb-fdeb-4063-8549-a32548acf95a\" (UID: \"f2b166eb-fdeb-4063-8549-a32548acf95a\") " Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171307 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-etc-cni-netd\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171327 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-bpf-maps\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" 
(UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171341 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-net\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172264 kubelet[2486]: I1009 07:55:41.171373 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-cgroup\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171388 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-kernel\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171407 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt72j\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-kube-api-access-tt72j\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171421 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-lib-modules\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171436 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-hubble-tls\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171453 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-config-path\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.172582 kubelet[2486]: I1009 07:55:41.171467 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-run\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.174067 kubelet[2486]: I1009 07:55:41.171484 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82ef0ccc-a252-48be-ba53-b5308961843a-clustermesh-secrets\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.174067 kubelet[2486]: I1009 07:55:41.171572 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: 
"82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.174067 kubelet[2486]: I1009 07:55:41.171618 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.174067 kubelet[2486]: I1009 07:55:41.172687 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-hostproc\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.174067 kubelet[2486]: I1009 07:55:41.172745 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvql5\" (UniqueName: \"kubernetes.io/projected/f2b166eb-fdeb-4063-8549-a32548acf95a-kube-api-access-nvql5\") pod \"f2b166eb-fdeb-4063-8549-a32548acf95a\" (UID: \"f2b166eb-fdeb-4063-8549-a32548acf95a\") " Oct 9 07:55:41.174667 kubelet[2486]: I1009 07:55:41.172800 2486 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cni-path\") pod \"82ef0ccc-a252-48be-ba53-b5308961843a\" (UID: \"82ef0ccc-a252-48be-ba53-b5308961843a\") " Oct 9 07:55:41.174667 kubelet[2486]: I1009 07:55:41.172867 2486 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-xtables-lock\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.174667 kubelet[2486]: I1009 07:55:41.172936 2486 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-kernel\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.174667 kubelet[2486]: I1009 07:55:41.172976 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cni-path" (OuterVolumeSpecName: "cni-path") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.175954 kubelet[2486]: I1009 07:55:41.175748 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b166eb-fdeb-4063-8549-a32548acf95a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2b166eb-fdeb-4063-8549-a32548acf95a" (UID: "f2b166eb-fdeb-4063-8549-a32548acf95a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 07:55:41.175954 kubelet[2486]: I1009 07:55:41.175846 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.175954 kubelet[2486]: I1009 07:55:41.175881 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.175954 kubelet[2486]: I1009 07:55:41.175897 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.175954 kubelet[2486]: I1009 07:55:41.175911 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.176572 kubelet[2486]: I1009 07:55:41.176550 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.181267 kubelet[2486]: I1009 07:55:41.180371 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-hostproc" (OuterVolumeSpecName: "hostproc") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.181267 kubelet[2486]: I1009 07:55:41.181238 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 07:55:41.185501 kubelet[2486]: I1009 07:55:41.185378 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-kube-api-access-tt72j" (OuterVolumeSpecName: "kube-api-access-tt72j") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "kube-api-access-tt72j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 07:55:41.188463 kubelet[2486]: I1009 07:55:41.188415 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 07:55:41.189642 kubelet[2486]: I1009 07:55:41.189611 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef0ccc-a252-48be-ba53-b5308961843a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 07:55:41.191022 kubelet[2486]: I1009 07:55:41.190984 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b166eb-fdeb-4063-8549-a32548acf95a-kube-api-access-nvql5" (OuterVolumeSpecName: "kube-api-access-nvql5") pod "f2b166eb-fdeb-4063-8549-a32548acf95a" (UID: "f2b166eb-fdeb-4063-8549-a32548acf95a"). InnerVolumeSpecName "kube-api-access-nvql5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 07:55:41.192896 kubelet[2486]: I1009 07:55:41.192838 2486 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "82ef0ccc-a252-48be-ba53-b5308961843a" (UID: "82ef0ccc-a252-48be-ba53-b5308961843a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 07:55:41.273598 kubelet[2486]: I1009 07:55:41.273546 2486 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-host-proc-sys-net\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.273810 kubelet[2486]: I1009 07:55:41.273796 2486 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-cgroup\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.273891 kubelet[2486]: I1009 07:55:41.273879 2486 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tt72j\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-kube-api-access-tt72j\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.273964 kubelet[2486]: I1009 07:55:41.273955 2486 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-lib-modules\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274058 kubelet[2486]: I1009 07:55:41.274011 2486 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82ef0ccc-a252-48be-ba53-b5308961843a-hubble-tls\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274153 kubelet[2486]: I1009 07:55:41.274145 2486 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-config-path\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274239 kubelet[2486]: I1009 07:55:41.274228 2486 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cilium-run\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274304 kubelet[2486]: I1009 07:55:41.274296 2486 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/82ef0ccc-a252-48be-ba53-b5308961843a-clustermesh-secrets\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274374 kubelet[2486]: I1009 07:55:41.274355 2486 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-hostproc\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274435 kubelet[2486]: I1009 07:55:41.274422 2486 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nvql5\" (UniqueName: \"kubernetes.io/projected/f2b166eb-fdeb-4063-8549-a32548acf95a-kube-api-access-nvql5\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274495 kubelet[2486]: I1009 07:55:41.274487 2486 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-cni-path\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274543 kubelet[2486]: I1009 07:55:41.274536 2486 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-etc-cni-netd\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274599 kubelet[2486]: I1009 07:55:41.274592 2486 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82ef0ccc-a252-48be-ba53-b5308961843a-bpf-maps\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.274647 kubelet[2486]: I1009 07:55:41.274640 2486 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2b166eb-fdeb-4063-8549-a32548acf95a-cilium-config-path\") on node \"ci-4081.1.0-4-ec1af0061e\" DevicePath \"\"" Oct 9 07:55:41.405688 systemd[1]: Removed slice kubepods-burstable-pod82ef0ccc_a252_48be_ba53_b5308961843a.slice - libcontainer container kubepods-burstable-pod82ef0ccc_a252_48be_ba53_b5308961843a.slice. Oct 9 07:55:41.406173 systemd[1]: kubepods-burstable-pod82ef0ccc_a252_48be_ba53_b5308961843a.slice: Consumed 8.122s CPU time. Oct 9 07:55:41.409697 systemd[1]: Removed slice kubepods-besteffort-podf2b166eb_fdeb_4063_8549_a32548acf95a.slice - libcontainer container kubepods-besteffort-podf2b166eb_fdeb_4063_8549_a32548acf95a.slice. 
Oct 9 07:55:41.793733 kubelet[2486]: I1009 07:55:41.792460 2486 scope.go:117] "RemoveContainer" containerID="1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6" Oct 9 07:55:41.796357 containerd[1472]: time="2024-10-09T07:55:41.796254917Z" level=info msg="RemoveContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\"" Oct 9 07:55:41.814999 containerd[1472]: time="2024-10-09T07:55:41.814385185Z" level=info msg="RemoveContainer for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" returns successfully" Oct 9 07:55:41.815176 kubelet[2486]: I1009 07:55:41.814813 2486 scope.go:117] "RemoveContainer" containerID="1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6" Oct 9 07:55:41.826144 containerd[1472]: time="2024-10-09T07:55:41.817318302Z" level=error msg="ContainerStatus for \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\": not found" Oct 9 07:55:41.832240 kubelet[2486]: E1009 07:55:41.832178 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\": not found" containerID="1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6" Oct 9 07:55:41.832415 kubelet[2486]: I1009 07:55:41.832256 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6"} err="failed to get container status \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dcdc04b4c07184e556b2b51186917f054da78980ab1276a1b0fdbdabd100bc6\": not found" Oct 9 07:55:41.832415 kubelet[2486]: I1009 07:55:41.832371 2486 scope.go:117] "RemoveContainer" containerID="ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419" Oct 9 07:55:41.834255 containerd[1472]: time="2024-10-09T07:55:41.834212650Z" level=info msg="RemoveContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\"" Oct 9 07:55:41.841532 containerd[1472]: time="2024-10-09T07:55:41.840729700Z" level=info msg="RemoveContainer for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" returns successfully" Oct 9 07:55:41.841701 kubelet[2486]: I1009 07:55:41.841097 2486 scope.go:117] "RemoveContainer" containerID="bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a" Oct 9 07:55:41.843475 containerd[1472]: time="2024-10-09T07:55:41.842978978Z" level=info msg="RemoveContainer for \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\"" Oct 9 07:55:41.848505 containerd[1472]: time="2024-10-09T07:55:41.848467510Z" level=info msg="RemoveContainer for \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\" returns successfully" Oct 9 07:55:41.849217 kubelet[2486]: I1009 07:55:41.849046 2486 scope.go:117] "RemoveContainer" containerID="8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7" Oct 9 07:55:41.855695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33d13f9df509aa62c0ee0242feab69cc9e2aaaec204c7fe5aa173fac227f4bb0-rootfs.mount: Deactivated successfully. 
Oct 9 07:55:41.857359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-348a012dd991e66a759c51cb4d1bc5d59e27142a95e859de50729cb0d9f868d1-rootfs.mount: Deactivated successfully. Oct 9 07:55:41.857685 systemd[1]: var-lib-kubelet-pods-f2b166eb\x2dfdeb\x2d4063\x2d8549\x2da32548acf95a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvql5.mount: Deactivated successfully. Oct 9 07:55:41.857754 systemd[1]: var-lib-kubelet-pods-82ef0ccc\x2da252\x2d48be\x2dba53\x2db5308961843a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtt72j.mount: Deactivated successfully. Oct 9 07:55:41.857814 systemd[1]: var-lib-kubelet-pods-82ef0ccc\x2da252\x2d48be\x2dba53\x2db5308961843a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 07:55:41.857872 systemd[1]: var-lib-kubelet-pods-82ef0ccc\x2da252\x2d48be\x2dba53\x2db5308961843a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 07:55:41.862924 containerd[1472]: time="2024-10-09T07:55:41.861711974Z" level=info msg="RemoveContainer for \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\"" Oct 9 07:55:41.866780 containerd[1472]: time="2024-10-09T07:55:41.866713728Z" level=info msg="RemoveContainer for \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\" returns successfully" Oct 9 07:55:41.867420 kubelet[2486]: I1009 07:55:41.867206 2486 scope.go:117] "RemoveContainer" containerID="46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea" Oct 9 07:55:41.868794 containerd[1472]: time="2024-10-09T07:55:41.868467297Z" level=info msg="RemoveContainer for \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\"" Oct 9 07:55:41.871152 containerd[1472]: time="2024-10-09T07:55:41.871053265Z" level=info msg="RemoveContainer for \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\" returns successfully" Oct 9 07:55:41.871619 kubelet[2486]: I1009 07:55:41.871575 2486 scope.go:117] "RemoveContainer" containerID="9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c" Oct 9 07:55:41.872811 containerd[1472]: time="2024-10-09T07:55:41.872764189Z" level=info msg="RemoveContainer for \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\"" Oct 9 07:55:41.875667 containerd[1472]: time="2024-10-09T07:55:41.875592298Z" level=info msg="RemoveContainer for \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\" returns successfully" Oct 9 07:55:41.876296 kubelet[2486]: I1009 07:55:41.876101 2486 scope.go:117] "RemoveContainer" containerID="ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419" Oct 9 07:55:41.876611 containerd[1472]: time="2024-10-09T07:55:41.876573595Z" level=error msg="ContainerStatus for \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\": not found" Oct 9 07:55:41.876905 kubelet[2486]: E1009 07:55:41.876766 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\": not found" containerID="ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419" Oct 9 07:55:41.876905 kubelet[2486]: I1009 07:55:41.876796 2486 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419"} err="failed to get container status \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca36be4002a1b4861a23d75cee029688823e234759dde838cfda1ab75f76a419\": not found" Oct 9 07:55:41.876905 kubelet[2486]: I1009 07:55:41.876827 2486 scope.go:117] "RemoveContainer" containerID="bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a" Oct 9 07:55:41.877466 containerd[1472]: time="2024-10-09T07:55:41.877220648Z" level=error msg="ContainerStatus for \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\": not found" Oct 9 07:55:41.877535 kubelet[2486]: E1009 07:55:41.877414 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\": not found" containerID="bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a" Oct 9 07:55:41.877535 kubelet[2486]: I1009 07:55:41.877434 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a"} err="failed to get container status \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfeee3dd984b6c1e6821f89b4cf5852b529935af0e55ff199c00c918e6ee5f6a\": not found" Oct 9 07:55:41.877679 kubelet[2486]: I1009 07:55:41.877452 2486 scope.go:117] "RemoveContainer" containerID="8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7" Oct 9 07:55:41.877838 containerd[1472]: time="2024-10-09T07:55:41.877777103Z" level=error msg="ContainerStatus for \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\": not found" Oct 9 07:55:41.878075 kubelet[2486]: E1009 07:55:41.877965 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\": not found" containerID="8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7" Oct 9 07:55:41.878075 kubelet[2486]: I1009 07:55:41.878013 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7"} err="failed to get container status \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8210c90d60a786d52055a273a57e7c0a61ec3098356a78e637c4f1f6a1c5c2d7\": not found" Oct 9 07:55:41.878075 kubelet[2486]: I1009 07:55:41.878036 2486 scope.go:117] "RemoveContainer" containerID="46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea" Oct 9 07:55:41.878413 containerd[1472]: time="2024-10-09T07:55:41.878364072Z" level=error msg="ContainerStatus for \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\": not found" Oct 9 07:55:41.878633 kubelet[2486]: E1009 07:55:41.878527 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\": not found" containerID="46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea" Oct 9 07:55:41.878633 kubelet[2486]: I1009 07:55:41.878549 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea"} err="failed to get container status \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"46269b497f60b6e23c705e15142ab8f6624d2b852aa14bd9427deda617a5a4ea\": not found" Oct 9 07:55:41.878633 kubelet[2486]: I1009 07:55:41.878564 2486 scope.go:117] "RemoveContainer" containerID="9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c" Oct 9 07:55:41.878892 kubelet[2486]: E1009 07:55:41.878835 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\": not found" containerID="9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c" Oct 9 07:55:41.878946 containerd[1472]: time="2024-10-09T07:55:41.878714629Z" level=error msg="ContainerStatus for \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\": not found" Oct 9 07:55:41.878980 kubelet[2486]: I1009 07:55:41.878901 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c"} err="failed to get container status \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dfddcf158901da27dd28877a6446221f92af3d72468bd88e00ced9fb624215c\": not found" Oct 9 07:55:42.398188 kubelet[2486]: E1009 07:55:42.398149 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:55:42.749800 sshd[4115]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:42.763769 systemd[1]: sshd@24-64.23.254.253:22-139.178.89.65:53456.service: Deactivated successfully. Oct 9 07:55:42.767581 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 07:55:42.770045 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Oct 9 07:55:42.777662 systemd[1]: Started sshd@25-64.23.254.253:22-139.178.89.65:53460.service - OpenSSH per-connection server daemon (139.178.89.65:53460). Oct 9 07:55:42.780170 systemd-logind[1446]: Removed session 25. 
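The burst of "ContainerStatus ... NotFound" errors above is expected rather than a failure: each RemoveContainer has already returned successfully, so the follow-up status lookup for the same ID can only come back NotFound, and kubelet records the error and moves on. A minimal sketch of that idempotent-delete pattern, using a hypothetical in-memory runtime stand-in, is below.

```python
# Sketch of the delete-then-status pattern behind the NotFound errors above:
# a NotFound after removal is treated as "already gone", not as a failure.
class NotFoundError(Exception):
    pass

class FakeRuntime:
    def __init__(self, containers):
        self._containers = set(containers)

    def remove_container(self, cid: str) -> None:
        self._containers.discard(cid)     # idempotent removal

    def container_status(self, cid: str) -> str:
        if cid not in self._containers:
            raise NotFoundError(f"container {cid!r} not found")
        return "EXITED"

def delete_container(runtime: FakeRuntime, cid: str) -> None:
    runtime.remove_container(cid)
    try:
        runtime.container_status(cid)
    except NotFoundError as err:
        # Mirrors the kubelet log: the error is logged, then ignored,
        # because NotFound after removal means the delete already stuck.
        print(f"DeleteContainer returned error (ignored): {err}")

if __name__ == "__main__":
    rt = FakeRuntime({"1dcdc04b4c07"})
    delete_container(rt, "1dcdc04b4c07")
```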
Oct 9 07:55:42.835754 sshd[4278]: Accepted publickey for core from 139.178.89.65 port 53460 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:42.837879 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:42.843002 systemd-logind[1446]: New session 26 of user core. Oct 9 07:55:42.852455 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 07:55:43.404778 kubelet[2486]: I1009 07:55:43.404735 2486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" path="/var/lib/kubelet/pods/82ef0ccc-a252-48be-ba53-b5308961843a/volumes" Oct 9 07:55:43.405469 kubelet[2486]: I1009 07:55:43.405440 2486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b166eb-fdeb-4063-8549-a32548acf95a" path="/var/lib/kubelet/pods/f2b166eb-fdeb-4063-8549-a32548acf95a/volumes" Oct 9 07:55:43.409455 sshd[4278]: pam_unix(sshd:session): session closed for user core Oct 9 07:55:43.423000 systemd[1]: sshd@25-64.23.254.253:22-139.178.89.65:53460.service: Deactivated successfully. Oct 9 07:55:43.426319 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 07:55:43.429222 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Oct 9 07:55:43.433636 systemd[1]: Started sshd@26-64.23.254.253:22-139.178.89.65:53472.service - OpenSSH per-connection server daemon (139.178.89.65:53472). Oct 9 07:55:43.438203 systemd-logind[1446]: Removed session 26. Oct 9 07:55:43.487813 sshd[4291]: Accepted publickey for core from 139.178.89.65 port 53472 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:55:43.492113 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:55:43.502618 systemd-logind[1446]: New session 27 of user core. Oct 9 07:55:43.508412 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508873 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="apply-sysctl-overwrites" Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508904 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2b166eb-fdeb-4063-8549-a32548acf95a" containerName="cilium-operator" Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508911 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="mount-bpf-fs" Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508918 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="clean-cilium-state" Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508924 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="cilium-agent" Oct 9 07:55:43.511274 kubelet[2486]: E1009 07:55:43.508930 2486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="mount-cgroup" Oct 9 07:55:43.511274 kubelet[2486]: I1009 07:55:43.508956 2486 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b166eb-fdeb-4063-8549-a32548acf95a" containerName="cilium-operator" Oct 9 07:55:43.511274 kubelet[2486]: I1009 07:55:43.508963 2486 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ef0ccc-a252-48be-ba53-b5308961843a" containerName="cilium-agent" Oct 9 07:55:43.538409 systemd[1]: Created slice kubepods-burstable-pod05f8be6c_66b1_49ef_b905_f21cdcee57af.slice - libcontainer container kubepods-burstable-pod05f8be6c_66b1_49ef_b905_f21cdcee57af.slice. 
Oct 9 07:55:43.585500 sshd[4291]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:43.590222 kubelet[2486]: I1009 07:55:43.590145 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05f8be6c-66b1-49ef-b905-f21cdcee57af-cilium-ipsec-secrets\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590222 kubelet[2486]: I1009 07:55:43.590192 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-cilium-cgroup\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590222 kubelet[2486]: I1009 07:55:43.590215 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-cilium-run\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590222 kubelet[2486]: I1009 07:55:43.590233 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-xtables-lock\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590251 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-etc-cni-netd\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590269 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05f8be6c-66b1-49ef-b905-f21cdcee57af-clustermesh-secrets\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590292 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-host-proc-sys-kernel\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590307 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-cni-path\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590322 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-bpf-maps\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590611 kubelet[2486]: I1009 07:55:43.590335 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-lib-modules\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590768 kubelet[2486]: I1009 07:55:43.590351 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05f8be6c-66b1-49ef-b905-f21cdcee57af-cilium-config-path\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590768 kubelet[2486]: I1009 07:55:43.590366 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm5d\" (UniqueName: \"kubernetes.io/projected/05f8be6c-66b1-49ef-b905-f21cdcee57af-kube-api-access-9lm5d\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590768 kubelet[2486]: I1009 07:55:43.590425 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-hostproc\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590768 kubelet[2486]: I1009 07:55:43.590479 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05f8be6c-66b1-49ef-b905-f21cdcee57af-host-proc-sys-net\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.590768 kubelet[2486]: I1009 07:55:43.590502 2486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05f8be6c-66b1-49ef-b905-f21cdcee57af-hubble-tls\") pod \"cilium-m6mw6\" (UID: \"05f8be6c-66b1-49ef-b905-f21cdcee57af\") " pod="kube-system/cilium-m6mw6"
Oct 9 07:55:43.596477 systemd[1]: sshd@26-64.23.254.253:22-139.178.89.65:53472.service: Deactivated successfully.
Oct 9 07:55:43.598918 systemd[1]: session-27.scope: Deactivated successfully.
Oct 9 07:55:43.600573 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Oct 9 07:55:43.606646 systemd[1]: Started sshd@27-64.23.254.253:22-139.178.89.65:53478.service - OpenSSH per-connection server daemon (139.178.89.65:53478).
Oct 9 07:55:43.609121 systemd-logind[1446]: Removed session 27.
Oct 9 07:55:43.651662 sshd[4299]: Accepted publickey for core from 139.178.89.65 port 53478 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o
Oct 9 07:55:43.653100 sshd[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 07:55:43.658750 systemd-logind[1446]: New session 28 of user core.
Oct 9 07:55:43.665368 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 9 07:55:43.846722 kubelet[2486]: E1009 07:55:43.846405 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:43.848259 containerd[1472]: time="2024-10-09T07:55:43.848209880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6mw6,Uid:05f8be6c-66b1-49ef-b905-f21cdcee57af,Namespace:kube-system,Attempt:0,}"
Oct 9 07:55:43.880953 containerd[1472]: time="2024-10-09T07:55:43.880443862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:55:43.880953 containerd[1472]: time="2024-10-09T07:55:43.880599133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:55:43.880953 containerd[1472]: time="2024-10-09T07:55:43.880630929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:55:43.881658 containerd[1472]: time="2024-10-09T07:55:43.880894001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:55:43.906433 systemd[1]: Started cri-containerd-d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa.scope - libcontainer container d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa.
Oct 9 07:55:43.949647 containerd[1472]: time="2024-10-09T07:55:43.949010349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6mw6,Uid:05f8be6c-66b1-49ef-b905-f21cdcee57af,Namespace:kube-system,Attempt:0,} returns sandbox id \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\""
Oct 9 07:55:43.950385 kubelet[2486]: E1009 07:55:43.950136 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:43.955109 containerd[1472]: time="2024-10-09T07:55:43.955062520Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 07:55:43.989449 containerd[1472]: time="2024-10-09T07:55:43.989379956Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96\""
Oct 9 07:55:43.990361 containerd[1472]: time="2024-10-09T07:55:43.990319457Z" level=info msg="StartContainer for \"ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96\""
Oct 9 07:55:44.026725 systemd[1]: Started cri-containerd-ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96.scope - libcontainer container ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96.
Oct 9 07:55:44.067410 containerd[1472]: time="2024-10-09T07:55:44.066669362Z" level=info msg="StartContainer for \"ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96\" returns successfully"
Oct 9 07:55:44.085635 systemd[1]: cri-containerd-ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96.scope: Deactivated successfully.
Oct 9 07:55:44.139943 containerd[1472]: time="2024-10-09T07:55:44.139760312Z" level=info msg="shim disconnected" id=ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96 namespace=k8s.io
Oct 9 07:55:44.139943 containerd[1472]: time="2024-10-09T07:55:44.139872200Z" level=warning msg="cleaning up after shim disconnected" id=ac019cf3632f97256b243cb922fe305d528c1d9b4d4f67b42d1132c90d904c96 namespace=k8s.io
Oct 9 07:55:44.139943 containerd[1472]: time="2024-10-09T07:55:44.139883319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:55:44.398530 kubelet[2486]: E1009 07:55:44.398460 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:44.820706 kubelet[2486]: E1009 07:55:44.819247 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:44.825564 containerd[1472]: time="2024-10-09T07:55:44.825516386Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 07:55:44.848728 containerd[1472]: time="2024-10-09T07:55:44.848662382Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062\""
Oct 9 07:55:44.850391 containerd[1472]: time="2024-10-09T07:55:44.850344314Z" level=info msg="StartContainer for \"ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062\""
Oct 9 07:55:44.901399 systemd[1]: Started cri-containerd-ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062.scope - libcontainer container ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062.
Oct 9 07:55:44.941161 containerd[1472]: time="2024-10-09T07:55:44.940986190Z" level=info msg="StartContainer for \"ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062\" returns successfully"
Oct 9 07:55:44.950605 systemd[1]: cri-containerd-ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062.scope: Deactivated successfully.
Oct 9 07:55:44.977416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062-rootfs.mount: Deactivated successfully.
Oct 9 07:55:44.981736 containerd[1472]: time="2024-10-09T07:55:44.981029026Z" level=info msg="shim disconnected" id=ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062 namespace=k8s.io
Oct 9 07:55:44.981736 containerd[1472]: time="2024-10-09T07:55:44.981496840Z" level=warning msg="cleaning up after shim disconnected" id=ffb5f97213c3b2bae4b2621970e923ce8c944e46765ee7268108a0ac7d1c6062 namespace=k8s.io
Oct 9 07:55:44.981736 containerd[1472]: time="2024-10-09T07:55:44.981512816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:55:45.520445 kubelet[2486]: E1009 07:55:45.520382 2486 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 9 07:55:45.826280 kubelet[2486]: E1009 07:55:45.825925 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:45.833154 containerd[1472]: time="2024-10-09T07:55:45.830447528Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 07:55:45.859839 containerd[1472]: time="2024-10-09T07:55:45.859289534Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643\""
Oct 9 07:55:45.862637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202511910.mount: Deactivated successfully.
Oct 9 07:55:45.868706 containerd[1472]: time="2024-10-09T07:55:45.863375062Z" level=info msg="StartContainer for \"546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643\""
Oct 9 07:55:45.925461 systemd[1]: Started cri-containerd-546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643.scope - libcontainer container 546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643.
Oct 9 07:55:45.991797 containerd[1472]: time="2024-10-09T07:55:45.991739342Z" level=info msg="StartContainer for \"546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643\" returns successfully"
Oct 9 07:55:45.998505 systemd[1]: cri-containerd-546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643.scope: Deactivated successfully.
Oct 9 07:55:46.038599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643-rootfs.mount: Deactivated successfully.
Oct 9 07:55:46.044606 containerd[1472]: time="2024-10-09T07:55:46.044140056Z" level=info msg="shim disconnected" id=546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643 namespace=k8s.io
Oct 9 07:55:46.044606 containerd[1472]: time="2024-10-09T07:55:46.044243732Z" level=warning msg="cleaning up after shim disconnected" id=546d27fd55d297aec4be670431befceb0982bb3b08100d2810b1a56dba30b643 namespace=k8s.io
Oct 9 07:55:46.044606 containerd[1472]: time="2024-10-09T07:55:46.044259624Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:55:46.834232 kubelet[2486]: E1009 07:55:46.833284 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:46.841170 containerd[1472]: time="2024-10-09T07:55:46.837829224Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 07:55:46.859417 containerd[1472]: time="2024-10-09T07:55:46.859354007Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7\""
Oct 9 07:55:46.861424 containerd[1472]: time="2024-10-09T07:55:46.860108081Z" level=info msg="StartContainer for \"9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7\""
Oct 9 07:55:46.913725 systemd[1]: run-containerd-runc-k8s.io-9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7-runc.iOjiu4.mount: Deactivated successfully.
Oct 9 07:55:46.924464 systemd[1]: Started cri-containerd-9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7.scope - libcontainer container 9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7.
Oct 9 07:55:46.956880 systemd[1]: cri-containerd-9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7.scope: Deactivated successfully.
Oct 9 07:55:46.963768 containerd[1472]: time="2024-10-09T07:55:46.963618193Z" level=info msg="StartContainer for \"9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7\" returns successfully"
Oct 9 07:55:46.992629 containerd[1472]: time="2024-10-09T07:55:46.992515156Z" level=info msg="shim disconnected" id=9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7 namespace=k8s.io
Oct 9 07:55:46.992629 containerd[1472]: time="2024-10-09T07:55:46.992583939Z" level=warning msg="cleaning up after shim disconnected" id=9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7 namespace=k8s.io
Oct 9 07:55:46.992629 containerd[1472]: time="2024-10-09T07:55:46.992592401Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 07:55:47.714511 kubelet[2486]: I1009 07:55:47.714423 2486 setters.go:600] "Node became not ready" node="ci-4081.1.0-4-ec1af0061e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-09T07:55:47Z","lastTransitionTime":"2024-10-09T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 9 07:55:47.841517 kubelet[2486]: E1009 07:55:47.841216 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:47.846423 containerd[1472]: time="2024-10-09T07:55:47.846140567Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 07:55:47.857322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fc5627cb19a4c84dbe900ce5cab8d27920956fc69281ebae6f062a57834bdd7-rootfs.mount: Deactivated successfully.
Oct 9 07:55:47.880837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561216277.mount: Deactivated successfully.
Oct 9 07:55:47.889635 containerd[1472]: time="2024-10-09T07:55:47.889553921Z" level=info msg="CreateContainer within sandbox \"d81f756adbaa1b36977afe2d080d87e2c630b814ab093d302cd8cb52bd06cafa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b\""
Oct 9 07:55:47.890991 containerd[1472]: time="2024-10-09T07:55:47.890883018Z" level=info msg="StartContainer for \"3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b\""
Oct 9 07:55:47.925373 systemd[1]: Started cri-containerd-3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b.scope - libcontainer container 3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b.
Oct 9 07:55:47.982570 containerd[1472]: time="2024-10-09T07:55:47.982411622Z" level=info msg="StartContainer for \"3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b\" returns successfully"
Oct 9 07:55:48.427358 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 9 07:55:48.849364 kubelet[2486]: E1009 07:55:48.849291 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:49.853164 kubelet[2486]: E1009 07:55:49.853024 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:50.303827 kubelet[2486]: E1009 07:55:50.303670 2486 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:42262->127.0.0.1:46687: read tcp 127.0.0.1:42262->127.0.0.1:46687: read: connection reset by peer
Oct 9 07:55:52.174879 systemd-networkd[1371]: lxc_health: Link UP
Oct 9 07:55:52.189347 systemd-networkd[1371]: lxc_health: Gained carrier
Oct 9 07:55:52.409879 systemd[1]: run-containerd-runc-k8s.io-3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b-runc.QmjjGe.mount: Deactivated successfully.
Oct 9 07:55:53.358317 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Oct 9 07:55:53.849136 kubelet[2486]: E1009 07:55:53.848973 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:53.867210 kubelet[2486]: E1009 07:55:53.864843 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:53.880051 kubelet[2486]: I1009 07:55:53.879559 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6mw6" podStartSLOduration=10.879537195 podStartE2EDuration="10.879537195s" podCreationTimestamp="2024-10-09 07:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:55:48.889305882 +0000 UTC m=+103.619557406" watchObservedRunningTime="2024-10-09 07:55:53.879537195 +0000 UTC m=+108.609788736"
Oct 9 07:55:54.867247 kubelet[2486]: E1009 07:55:54.866825 2486 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:55:59.015956 systemd[1]: run-containerd-runc-k8s.io-3b30268232993a174d58995fbf9a2bfb36b0021419fd7505f914f96718dba33b-runc.xJ4HiP.mount: Deactivated successfully.
Oct 9 07:55:59.094918 sshd[4299]: pam_unix(sshd:session): session closed for user core
Oct 9 07:55:59.099905 systemd[1]: sshd@27-64.23.254.253:22-139.178.89.65:53478.service: Deactivated successfully.
Oct 9 07:55:59.102601 systemd[1]: session-28.scope: Deactivated successfully.
Oct 9 07:55:59.104295 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Oct 9 07:55:59.105507 systemd-logind[1446]: Removed session 28.